Bernstein Conference

Heidelberg / Mannheim 2015

Abstract Book

September 15 – September 17, 2015, Heidelberg

Bernstein Conference 2015 September 15 – 17, 2015

Ruprecht-Karls-Universität Heidelberg Neue Universität, Grabengasse 3-5, Universitätsplatz

Program and Abstracts

Local Organizers
Peter Bastian, Andreas Draguhn, Daniel Durstewitz, Martin Gerchen, Joachim Hass, Elke Jochum, Peter Kirsch, Andreas Meyer-Lindenberg, Christoph Schuster, Simone Seeger

Co-Organizers
Bernstein Coordination Site (BCOS): Andrea Huber Brösamle, Kerstin Schwarzwälder (Exhibitor Coordinator), Mareike Kardinal (Public Relations)

PhD Student Event Organizers
Martin Gerchen, Joachim Hass

Abstract Handling
G-Node

Administrative Support
Felicitas Hirsch, Elke Jochum, Christine Roggenkamp, Ellen Schmucker, Simone Seeger

Registration company
K.I.T. Group GmbH Dresden, Münzgasse 2, 01067 Dresden, Germany

Workshop Program Committee
Matthias Bethge (Bernstein Center Tübingen)
Upinder Bhalla (NCBS, Bangalore)
Carlos Brody (Princeton University)
Gustavo Deco (University Pompeu Fabra, Barcelona)
Andreas Draguhn (Bernstein Center Heidelberg-Mannheim)
Daniel Durstewitz (Bernstein Center Heidelberg-Mannheim)
Gaute Einevoll (Norwegian University of Life Sciences, Aas)
Andreas Herz (Bernstein Center Munich)
Peter Kirsch (Bernstein Center Heidelberg-Mannheim)
Sara Solla (Northwestern University, Evanston)

Contents

Schedule 1
  Workshops 1
  PhD Symposium 2
  Main Conference 2

Conference Information 6
  Conference venue 6
  Special Events 11

Invited Talks 17

Contributed Talks 25

Posters Tuesday 33

Posters Wednesday 129

Index 225
  Authors 226


Funding The conference is mainly funded by the German Federal Ministry of Education and Research (BMBF) and the German Research Foundation (DFG).

Participating Institutions Bernstein Center for Computational Neuroscience Heidelberg / Mannheim Central Institute of Mental Health, Mannheim Ruprecht-Karls-Universität Heidelberg

Exhibitors Multichannel Systems MCS GmbH npi electronic GmbH Springer Verlag GmbH Biozym Scientific GmbH Frontiers NVIDIA GmbH The MIT Press

Sponsors NVIDIA GmbH neuroConn GmbH


Schedule

Workshops
The Bernstein Conference features a series of satellite workshops on September 14. The goal is to provide a forum for the discussion of topical research questions and challenges. Details of the individual workshops are available on the conference webpage (www.bernsteinconference.de).

WS1 What does the eye tell the brain – and why (Philipp Berens, Tom Baden)
WS2 Reproducibility in the neurosciences: Workflows, provenance tracking and data sharing (Michael Denker, Sonja Grün, Thomas Wachtler)
WS3 Neural mechanisms linking visual perception to spatial cognition (Fred Hamker, Pieter Medendorp)
WS4 canceled
WS5 Computational biophysical psychiatry: From genes to systems (Gaute T. Einevoll, Daniel Durstewitz, Ole A. Andreassen)
WS6 Estimating parameters and unobserved state variables from neural data (Hazem Toutounji, Nirag Kadakia)
WS7 How do time and sensory information interact in perceptual decision making? (Arash Fassihi, Mathew E. Diamond)
WS8 In vitro neuronal networks from 2D to 3D (Jari Hyttinen, Jarno Tanskanen)
WS9 Multiple shades of investigating the (im)balanced brain (Annette B. Brühl, Annemieke M. Apergis-Schoute)
WS10 Replicability and reproducibility of neural network simulations (David Dahmen, Hans Ekkehard Plesser, Susanne Kunkel)
WS11 Dynamic retinal coding: Mechanisms and consequences (Thomas Münch)
WS12 From retina to robots – connecting the neural computations of early vision to neuromorphic engineering and artificial vision (Tim Gollisch, Stefano Panzeri)
WS13 Ready, set, go – Anticipation, timing and action in spiking neural networks (Christoph Richter, Florian Röhrbein, Jörg Conradt)


PhD Symposium
Thursday, September 17
15:00 h First meeting and discussion in an informal atmosphere
20:00 h Socializing and dinner in Heidelberg's old town

Friday, September 18
09:00 h Mattia Rigotti (New York, USA): Functional characterization and measure of the dimensionality of neural responses in cognitive tasks
10:30 h Interactive Session I
12:30 h Lunch Break
13:30 h Interactive Session II
16:00 h Closing Remarks

Main Conference

Tuesday, September 15

OPENING SESSION

09:00 h

Welcome by Bernhard Eitel, President of Heidelberg University Welcome by Andreas Herz, Speaker of the Bernstein Network Computational Neuroscience


09:20 h

Presentation of the Valentino Braitenberg Award by Ad Aertsen Lecture by Valentino Braitenberg Award Winner Alexander Borst (MPI of Neurobiology, Martinsried, Germany)

10:15 h

Coffee Break

10:45 h

Welcome and Presentation of Bernstein Award 2015 by Matthias Kölbel (Federal Ministry of Education and Research)

11:00 h

Lecture by Bernstein Award Winner

12:00 h

Lunch Break

COMPUTATIONAL NEUROSCIENCE OF PSYCHIATRIC AND NEUROLOGICAL CONDITIONS (Chair: Andreas Meyer-Lindenberg)

14:00 h

Ray Dolan (University College London, UK) The neural architecture of decision making and its regulation by dopamine

14:45 h

Martin F. Gerchen (CIMH Mannheim, Germany) Modelling whole-brain psychophysiological interactions: further insights into task-dependent functional magnetic resonance imaging data in psychiatry

15:00 h

Edward T. Bullmore (University of Cambridge, UK) Connectomics: graph theoretical analysis of brain networks

15:45 h

Coffee Break

16:15 h

Borislav Antic (University of Heidelberg, Germany) Evaluating stroke recovery by structural decomposition of motor kinematics

16:30 h

Klaas Enno Stephan (University of Zurich & ETH Zurich, Switzerland) Translational neuromodeling

17:15 h

POSTER SESSION I

20:00 h

Public Lecture: Christian Büchel (University Medical Center, Hamburg) Pain and pain modulation: from spinal to cortical processing

Wednesday, September 16

NOVEL APPROACHES TO DATA ANALYSIS IN NEUROPHYSIOLOGY AND NEUROIMAGING (Chair: Daniel Durstewitz, Sonja Grün, Stefan Rotter)

09:00 h

Emery Brown (MIT, Massachusetts, USA) Deciphering the Dynamics of the Unconscious Brain Under General Anesthesia

09:45 h

Jakob Macke (caesar Bonn & BCCN Tübingen, Germany) Correlations and signatures of criticality in neural population models

10:00 h

Jonathan Pillow (University of Texas, Austin, USA) Unlocking single-trial dynamics of spike trains in parietal cortex during decision-making

10:45 h

Coffee Break

11:15 h

Nirag Kadakia (University of California, San Diego, USA) State and parameter estimation in neurons of the song production pathway in zebra finch songbirds

INFORMATION PROCESSING IN PREFRONTAL HIPPOCAMPAL NETWORKS (Chair: Andreas Draguhn)

11:30 h

Ila Fiete (University of Texas, Austin, USA) Coding and dynamics in the hippocampal formation

12:15 h

Lunch Break

14:00 h

Loren Frank (University of California, San Francisco, USA) Neural substrates of memories and decisions

14:45 h

Christian Leibold (LMU Munich, Germany) Optimal networks for integrating two input spaces

15:00 h

Francesco Paolo Battaglia (Donders Centre for Neuroscience, Nijmegen, The Netherlands) Cortex-wide neuronal sequences, and their hippocampal dependent replay

15:45 h

Coffee Break

16:15 h

Alvaro Tejero-Cantero (University of Oxford, UK) Automatic discovery of brain states from multivariate LFPs during appetitive behavior

16:30 h

Keynote Lecture Karl Deisseroth (Stanford University, USA) Optical tools for studying intact biological systems

17:30 h

POSTER SESSION II

20:00 h

Conference Dinner (Kulturbrauerei Heidelberg)

Thursday, September 17 GENES AND NEURAL NETWORK (DYS-)FUNCTION (Chair: Peter Kirsch)


09:00 h

Ofer Yizhar (Weizmann Institute of Science, Rehovot, Israel) Understanding the roles of prefrontal long-range connections through targeted optogenetic perturbation

09:45 h

Manish K. Asthana (Mackenzie Presbyterian University, Sao Paulo, Brazil) The brain-derived neurotrophic factor Val66Met polymorphism moderates reconsolidation of fear memory in humans?

10:00 h

Coffee Break

10:30 h

Torfi Sigurdsson (University of Frankfurt, Germany) Neural network dysfunction in animal models of schizophrenia

11:15 h

Emmanuel Schwarz (CIMH Mannheim, Germany) Mapping psychiatric risk genes to biological function through protein interaction networks

11:30 h

Presentation of the Brains for Brains Award 2015 and Lecture by the Awardee
Lea Ankri (Hebrew University of Jerusalem, Israel)
Possible modulation of sensory input by a novel feedback from the cerebellar nuclei to the cerebellar cortex

12:00 h

Closing Remarks


Conference Information

Conference venue
Main conference, Workshops and Public Lecture: Neue Universität, Grabengasse 3-5, Universitätsplatz

Opening hours registration desk
Monday, Sept 14: 08:00 – 18:30 h
Tuesday, Sept 15: 08:00 – 18:30 h
Wednesday, Sept 16: 08:00 – 18:30 h
Thursday, Sept 17: 08:00 – 13:00 h


Floor Plan Neue Universität (ground floor)


Floor Plan Neue Universität (first floor)


Floor Plan Neue Universität (second floor)

Poster presentations
Session I: Tuesday, September 15, 17:15 – 19:00 h
Session II: Wednesday, September 16, 17:30 – 19:00 h

Poster boards are numbered according to the abstract numbers in these proceedings (T indicates the first poster session on Tuesday, W the second poster session on Wednesday). Pins for putting up posters will be provided. Posters can be mounted starting at 13:00 h on the day of the respective poster session. Please take your poster down before 12:00 h the next day. The conference staff will remove all posters that are not taken down after the poster sessions. Posters that are not picked up at the registration desk by Thursday, September 17, 16:30 h will be disposed of.

Abstracts Conference abstracts including high-resolution versions of figures will be published online at http://www.g-node.org/abstracts/bc15.


Internet
Wireless web access will be provided free of charge. How to connect:
• Choose the SSID 'UNI-WEBACCESS'.
• Leave all other parameters at their defaults, including the security mechanism (open / no encryption).
• You will be assigned a TCP/IP address from our DHCP server, provided your network settings are set to 'Obtain an IP address automatically' (which is the default).
• Then connect to the internet through a web browser (e.g. Internet Explorer, Firefox, Opera, Safari) and open any website.
• You will be redirected to our login site (pop-ups and JavaScript have to be enabled for this site).
• If you receive an SSL certificate error message, you should update your browser or install the root certificate "Deutsche Telekom Root CA 2" (you will find the download link on our login site).
• After logging in with the User ID and password provided below, you can use all the ports and protocols which we usually offer for a conference network.

Web authentication: User ID rl2, password bsc#2015

Important note: No security measures (i.e. encryption, firewalling, etc.) have been enabled. We highly recommend that you take all appropriate steps: e.g. do not share personal or system files, and use a personal firewall. Please keep in mind that unencrypted communication may be intercepted by others on the hotspot unless protocols such as SSL (HTTPS) or a VPN are used to protect information and passwords.

Bernstein Conference Dinner The Bernstein Conference dinner takes place in the Heidelberger Kulturbrauerei, Leyergasse 6, which is 15 minutes walking distance from the Central Lecture Hall. The dinner starts on Wednesday, Sept 16, at 20:00h.

Name tags
Official name tags will be required for admission to all conference events. Participants who lose their name tags will have to pay a fee of EUR 10 to obtain a replacement tag.

Wardrobe
Storage space for wardrobe and luggage will be provided in a separate room. Please ask at the registration desk. The organizer assumes no liability for valuables left in the wardrobe at the venue.

Conference coordination
Bernstein Center for Computational Neuroscience Heidelberg / Mannheim
Central Institute of Mental Health, J5, 68159 Mannheim
Ruprecht-Karls-Universität Heidelberg, Im Neuenheimer Feld 326, 69120 Heidelberg


Special Events

Bernstein Award for Computational Neuroscience 2015
Since 2006, the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) has annually conferred the Bernstein Award for Computational Neuroscience on one excellent junior researcher with outstanding ideas for new research projects. The award is endowed with up to EUR 1.25 million over the course of five years and is one of the most highly remunerated research awards for young scientists in Germany. With this funding, the awardee can establish his or her own independent research group at a research institution of their choice in Germany. The Bernstein Award will be conferred by RD Dr. Matthias Kölbel, Head of Unit 614 (Development of Methods and Structures in the Life Sciences) of the German Federal Ministry of Education and Research (BMBF), during the opening session of the Bernstein Conference 2015 on September 15. Following the award ceremony, the awardee will present his or her current research and the future projects to be conducted with the support of the award.

Valentino Braitenberg Award for Computational Neuroscience
The award is named after Prof. Dr. Valentino Braitenberg (1926, Bozen – 2011, Tübingen), one of the founding directors of the Max Planck Institute for Biological Cybernetics. With his novel research approach combining anatomy, physiology, and theory, Valentino Braitenberg was a pioneer of the modern research discipline of Computational Neuroscience and contributed significantly to the development of biological cybernetics, which has in turn inspired robotics and artificial intelligence. On the occasion of her father's 65th birthday, Carla Braitenberg, together with Max Gulin, created a Golden Neuron to be awarded in subsequent years as a challenge trophy for outstanding scientific achievements. Since 2012, this tradition has been continued in a modified form: with the financial support of the "Autonome Provinz Bozen Südtirol", the Valentino Braitenberg Award is now conferred biennially within the framework of the Bernstein Conference. The 2014 prize-giving ceremony, including the talk of the awardee Alexander Borst (Max Planck Institute of Neurobiology, Martinsried, Germany), had to be rescheduled. It will take place during this year's Bernstein Conference on Tuesday, September 15, 2015 at 9:20 a.m. The award will be presented by Ad Aertsen (Bernstein Center and University of Freiburg).

Brains for Brains Award 2015
The Brains for Brains Award is an initiative of the Bernstein Association for Computational Neuroscience. This year's award is kindly supported by npi electronic GmbH, Tamm, and NVIDIA GmbH, Munich. Since 2010, the Brains for Brains Award has honored outstanding young international scientists who achieved a peer-reviewed scientific publication before starting their doctoral thesis. It consists of a cash prize of EUR 500 and a travel fellowship of up to EUR 2,000 covering the trip to Germany, participation in the Bernstein Conference, and two individually planned visits to selected Computational Neuroscience labs in Germany. The 2015 awardee will present a poster during the poster session on Wednesday, September 16. Additionally, she will give a short talk during the award ceremony on Thursday, September 17. This year's award goes to Lea Ankri (Hebrew University of Jerusalem, Israel).


PhD Symposium
Venue: Psychologisches Institut, Hauptstraße 47-51, 69117 Heidelberg

Thursday, September 17
15:00 h First meeting and discussion in an informal atmosphere
20:00 h Socializing and dinner in Heidelberg's old town

Friday, September 18
09:00 h Mattia Rigotti (New York, USA): Functional characterization and measure of the dimensionality of neural responses in cognitive tasks

Single-neuron activity in prefrontal cortex (PFC) is characterized by striking complexity: in animals engaged in cognitive behavior, PFC neurons are reliably but idiosyncratically tuned to mixtures of multiple task-related aspects. I will propose that such "mixed selectivity" at the level of individual cells is a signature of high dimensionality at the level of the population activity, whose computational role is consistent with PFC's importance for cognitive flexibility. This hypothesis is supported by computational models that reveal an impressive advantage of high-dimensional activity patterns over specialized low-dimensional representations in terms of the larger repertoire of downstream response functions that they can accommodate. Moreover, I will show empirical evidence that the dimensionality of the neural activity patterns is predictive of animal behavior, as it collapses in error trials in a working memory task. Having established the importance of high-dimensional neural representations, I will propose a noninvasive method that exploits repetition suppression to estimate the dimensionality of neural activity patterns, even in the case where response properties are highly heterogeneous and not anatomically organized.

10:30 h Interactive Session I
12:30 h Lunch Break
13:30 h Interactive Session II
16:00 h Closing Remarks
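A common, simple proxy for the dimensionality of population activity discussed in the abstract above is the participation ratio of the covariance eigenvalues, PR = (Σλ)² / Σλ². The talk itself proposes a repetition-suppression-based estimator; the NumPy sketch below is only an illustration of the general notion, contrasting a caricature of low-dimensional ("pure selectivity") data with independent, high-dimensional activity. All variable names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(X):
    """Dimensionality proxy for data X (samples x units):
    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues)."""
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

n_units, n_samples = 50, 5000

# Low-dimensional caricature: every unit driven by the same 2 latent
# task variables -> population activity is effectively 2-dimensional.
latents = rng.standard_normal((n_samples, 2))
W = rng.standard_normal((2, n_units))
low_d = latents @ W + 0.05 * rng.standard_normal((n_samples, n_units))

# High-dimensional caricature: independent variability in every unit
# -> dimensionality close to the number of units.
high_d = rng.standard_normal((n_samples, n_units))
```

With this measure, `participation_ratio(low_d)` comes out close to 2 and `participation_ratio(high_d)` close to the number of units, mirroring the collapse-versus-expansion contrast the abstract describes.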



Public Lecture
Venue: Neue Universität, Grabengasse 3-5, Universitätsplatz, Hörsaal 13
Tuesday, September 15, 20:00 h: Public Lecture (in German)
Pain and pain modulation: from spinal to cortical processing
Christian Büchel, Institut für Systemische Neurowissenschaften, Universitätsklinikum Hamburg-Eppendorf, Germany

This presentation will focus on the neurobiological mechanisms of pain perception investigated in humans using functional magnetic resonance imaging (fMRI). After reviewing early studies on basic pain properties, the talk will focus on novel insights into learning mechanisms of pain and novel imaging techniques which allow the visualization of pain processing at the level of the spinal cord in humans. In addition, the presentation will cover the neuronal mechanisms of how expectation can shape pain perception. This will be illustrated through experiments on placebo analgesia, a prominent example of how cognition can modulate pain perception. As placebo analgesia has been shown to be mediated by endorphins, a study using the opioid antagonist naloxone will also be presented. High-resolution fMRI of the human spinal cord could further show that decreased pain responses under placebo were paralleled by strongly reduced pain-related activity in the spinal cord, providing direct evidence for spinal inhibition as one mechanism of placebo analgesia. The converse, namely increased activation in the opposite direction (i.e. the nocebo effect), will also be presented. Finally, the talk will end by presenting a recently published imaging method that is capable of assessing the brain and the spinal cord at the same time. This, for the first time, allows assessment of interregional coupling between brain and spinal cord, which will be helpful in studying how cognition can affect pain perception and will also help to unravel the mechanisms of chronic pain.


Bernstein Association for Computational Neuroscience
The Bernstein Association for Computational Neuroscience supports science, research, and education in Computational Neuroscience and explains research topics and findings to the general public. The Bernstein Association was founded in 2009 by members of the Bernstein Network and is recognized as a non-profit organization. Everyone who is active in the field of Computational Neuroscience or related subjects can become a member of the Bernstein Association. More information can be found on the website (www.nncn.de/en/bernstein-association) or at the Bernstein Network Information Booth during the Bernstein Conference.

Events at the Bernstein Network Information Booth:
Tuesday, September 15
12:00 h Bernstein Sofa: Lunch Event – Meet the Braitenberg Awardee Alexander Borst (event for PhD students, registration required)
17:30 h Welcome event for all members of the Bernstein Association for Computational Neuroscience* and the Bernstein Conference attendees. Vernissage of the Bernstein Calendar 2016 exhibition.
During the whole conference
Come & Play: check out a hands-on neuron game of the Bernstein Center Freiburg and experience the interaction of neurons in a neuronal network.

* For further information on the Bernstein Association, visit us at the Bernstein Network Information Booth.


Invited Talks


[I 1] The neural architecture of decision making and its regulation by dopamine
R J Dolan1
1. Wellcome Trust Centre for Neuroimaging, University College London
doi: 10.12751/nncn.bc2015.0001

Decision making under risk lies at the confluence of a number of disciplines, including economics, psychology and neuroscience. There is a growing knowledge base regarding the neural underpinnings of distinct types of decision making processes, yet the precise role of one major ingredient, the neuromodulator dopamine, is unclear. While dopamine's role in mediating reward-based learning is well described, including within a rich theoretical perspective, much less is known regarding its wider role in shaping decisions. Here I will focus entirely on human-based studies, in which ageing serves as a naturalistic proxy for dopamine loss; these studies have begun to reveal some surprising findings about dopamine's effects, including how it impacts subjective hedonic states.

[I 2] Connectomics: graph theoretical analysis of brain networks
Ed Bullmore1
1. University of Cambridge and GlaxoSmithKline
doi: 10.12751/nncn.bc2015.0002

There has been growing interest in understanding the network organization of the human brain, also known as the connectome. By graph theoretical analysis of connectivity matrices derived from magnetic resonance imaging (MRI) it has been shown that human brain networks have a complex topology, characterised by small-worldness, modularity, and the existence of hub nodes and rich clubs. Human brains are also parsimoniously wired, with a strong bias towards short physical distances between connected regions, but wiring cost is not strictly minimized. It is suggested that brain networks have been selected by a trade-off between competitive pressures to minimize biological cost and to maximise cognitively valuable topological integration. High-cost/high-value network components, like connector hubs, are preferentially implicated in diverse clinical brain disorders. We show that analogous principles apply to brain networks of the nematode worm C. elegans and the mouse, and that new technologies, such as high-resolution micro-electrode arrays, will enable a more detailed analysis of the biological mechanisms driving network topology and development. It seems that network science is in a strong position to understand more completely both the nearly-universal principles and the specific biological details of the connectome.
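The graph measures named in this abstract, clustering (small-worldness) and path length, can be computed directly from a binary connectivity matrix. The following NumPy sketch (illustrative only, not from the talk) contrasts a ring lattice with a copy given a few long-range "shortcut" edges: the classic demonstration that small-world topology combines high clustering with short average paths. Network sizes and edge placements are arbitrary assumptions.

```python
import numpy as np

def clustering_coefficient(A):
    """Mean local clustering of a binary, symmetric adjacency matrix."""
    n = A.shape[0]
    vals = []
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            vals.append(0.0)
            continue
        # Edges among i's neighbours; the symmetric sum counts each twice.
        vals.append(A[np.ix_(nbrs, nbrs)].sum() / (k * (k - 1)))
    return float(np.mean(vals))

def char_path_length(A):
    """Average shortest-path length (BFS from every node; graph assumed connected)."""
    n = A.shape[0]
    total, pairs = 0, 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        queue = [s]
        while queue:
            u = queue.pop(0)
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += dist[dist > 0].sum()
        pairs += (dist > 0).sum()
    return total / pairs

def ring_lattice(n, k):
    """Each node linked to its k nearest neighbours (k/2 on each side)."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            A[i, (i + j) % n] = A[(i + j) % n, i] = 1
    return A

A = ring_lattice(60, 4)          # high clustering, long paths
B = A.copy()
for i, j in [(0, 30), (10, 40), (20, 50)]:
    B[i, j] = B[j, i] = 1        # a few long-range shortcuts
```

Adding just three shortcuts markedly shortens the characteristic path length of `B` while its clustering stays close to the lattice value of 0.5, which is the trade-off between wiring cost and topological integration that the abstract describes.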


[I 3] Translational Neuromodeling
Klaas Enno Stephan1
1. Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich & ETH Zurich
doi: 10.12751/nncn.bc2015.0003

For many brain diseases, particularly in psychiatry, we lack objective diagnostic tests and cannot predict optimal treatment for individual patients. This presentation outlines a translational neuromodeling framework which aims at establishing “computational assays” for inferring subject-specific mechanisms of brain disease from non-invasive measures of behaviour and neuronal activity. Based on a generative modelling approach, such assays may provide a formal basis for differential diagnosis and treatment predictions in individual patients and, eventually, facilitate the construction of pathophysiologically grounded disease classifications. The framework presented emphasises the importance of prospective validation studies in patients and of concrete clinical problems for providing benchmarks for model validation. I will show some early (and very simple) proof of concept studies in patients, and outline the opportunities and challenges that lie ahead.

[I 4] Pain and pain modulation: from spinal to cortical processing
Christian Büchel1
1. Department of Systems Neuroscience, Hamburg, Germany
doi: 10.12751/nncn.bc2015.0004

This presentation will focus on the neurobiological mechanisms of pain perception investigated in humans using functional magnetic resonance imaging (fMRI). After reviewing early studies on basic pain properties, the talk will focus on novel insights into learning mechanisms of pain and novel imaging techniques which allow the visualization of pain processing at the level of the spinal cord in humans. In addition, the presentation will cover the neuronal mechanisms of how expectation can shape pain perception. This will be illustrated through experiments on placebo analgesia, a prominent example of how cognition can modulate pain perception. As placebo analgesia has been shown to be mediated by endorphins, a study using the opioid antagonist naloxone will also be presented. High-resolution fMRI of the human spinal cord could further show that decreased pain responses under placebo were paralleled by strongly reduced pain-related activity in the spinal cord, providing direct evidence for spinal inhibition as one mechanism of placebo analgesia. The converse, namely increased activation in the opposite direction (i.e. the nocebo effect), will also be presented. Finally, the talk will end by presenting a recently published imaging method that is capable of assessing the brain and the spinal cord at the same time. This, for the first time, allows assessment of interregional coupling between brain and spinal cord, which will be helpful in studying how cognition can affect pain perception and will also help to unravel the mechanisms of chronic pain.




[I 5] Deciphering the Dynamics of the Unconscious Brain Under General Anesthesia
Emery N Brown1
1. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
doi: 10.12751/nncn.bc2015.0005

General anesthesia is a drug-induced, reversible condition comprised of five behavioral states: unconsciousness, amnesia (loss of memory), analgesia (loss of pain sensation), akinesia (immobility), and hemodynamic stability with control of the stress response. The mechanisms by which anesthetic drugs induce the state of general anesthesia are considered one of the biggest mysteries of modern medicine. Using electrophysiological recordings, mathematical modeling and neural signal processing techniques, we study four problems to decipher this mystery. First, we present findings from our human studies of general anesthesia using high-density EEG and intracranial recordings which have allowed us to give a detailed characterization of the neurophysiology of loss and recovery of consciousness due to the standard anesthetics. Second, we show how the response to anesthesia changes as a function of age. Third, we present a neuro-metabolic model of burst suppression, the profound state of brain inactivation seen in deep states of general anesthesia. We show that our characterization of burst suppression can be used to design a closed-loop anesthesia delivery system for control of a medically-induced coma. Finally, we demonstrate that the state of general anesthesia can be rapidly reversed by activating specific brain circuits. Our results show that it is now possible to have a detailed neurophysiological understanding of the brain under general anesthesia, and that this understanding can be used to precisely monitor and control anesthetic states. Hence, general anesthesia is not a mystery.

[I 6] Unlocking single-trial dynamics of spike trains in parietal cortex during decision-making
Jonathan Pillow1
1. Psychology & Neurobiology, Center for Perceptual Systems, The University of Texas at Austin
doi: 10.12751/nncn.bc2015.0006

Neural firing rates in the macaque lateral intraparietal (LIP) cortex exhibit "ramping", a phenomenon that is commonly believed to reflect the accumulation of sensory evidence during decision-making. However, ramping that appears in trial-averaged responses does not necessarily imply spike rate ramps on single trials; a ramping average could also arise from instantaneous steps that occur at different times on different trials. In this talk, I will describe an approach to this problem based on explicit statistical latent-dynamical models of spike trains. We analyzed LIP spike responses using spike train models with: (1) ramping "accumulation-to-bound" dynamics; and (2) discrete "stepping" or "switching" dynamics. Surprisingly, we found that roughly three quarters of choice-selective neurons in LIP are better explained by a model with stepping dynamics. We show that the stepping model provides an accurate description of LIP spike trains, allows for accurate decoding of decisions, and reveals latent structure that is hidden by conventional stimulus-aligned analyses.
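The identifiability point at the heart of this abstract, that trial-averaging can make step-like single-trial dynamics look like a ramp, is easy to reproduce in simulation. The sketch below (an illustration under assumed rates and step-time distributions, not the latent-dynamical models used in the talk) draws Poisson spikes from a per-trial stepping rate with random step times; the resulting PSTH rises gradually even though no single trial ramps.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins, dt = 500, 100, 0.01     # trials, time bins, bin width (s)

# Stepping model: each trial jumps from 10 Hz to 50 Hz at a random bin.
low, high = 10.0, 50.0
step_bins = rng.integers(10, 90, size=n_trials)
rates = np.full((n_trials, n_bins), low)
for i, s in enumerate(step_bins):
    rates[i, s:] = high

# Poisson spike counts per bin, then the trial-averaged PSTH in Hz.
counts = rng.poisson(rates * dt)
psth = counts.mean(axis=0) / dt

# The PSTH climbs smoothly from ~10 Hz toward ~50 Hz, mimicking a ramp,
# although every underlying trial is a discrete step.
```

Distinguishing the two hypotheses therefore requires single-trial model comparison, as in the talk, rather than inspection of the trial average.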

[I 7] Neural substrates of memories and decisions
Loren M Frank1
1. Department of Physiology, UCSF Center for Integrative Neuroscience, San Francisco
doi: 10.12751/nncn.bc2015.0007

Hippocampal-prefrontal cortex interactions are critical for learning and memory, but the nature of these interactions remains unclear. In this talk I will first present past and ongoing work implicating hippocampal memory replay during sharp-wave ripple events as a critical contributor to learning and memory processes. Our findings indicate that these events are important behaviorally and engage highly specific functional ensembles that underlie hippocampal-prefrontal interactions. At the same time, technological and algorithmic constraints limit our current understanding of these and related brain events. In the second part of the talk I will present results from ongoing work where we are developing new recording technology, cluster-less decoding algorithms and real-time feedback systems that will make it possible to measure larger-scale patterns of neural activity across structures and to use the millisecond-timescale content of those patterns to perform targeted manipulations. These new approaches should make it possible to understand the role of specific patterns of brain activity in cognitive processes.

[I 8] Cortex-wide neuronal sequences, and their hippocampal dependent replay
Francesco Battaglia1
1. Donders Centre for Neuroscience, Nijmegen, NL
doi: 10.12751/nncn.bc2015.0008

According to systems consolidation theory, the hippocampus is key for the formation of new memories. Those memories are then slowly embedded in a larger, neocortical store, where they are re-organized and integrated into a more semantic-like memory corpus. Following Donald Hebb's idea, information in the brain is encoded in the form of cell assemblies, neuronal groups that may activate simultaneously or in sequence. Synaptic plasticity engraves those assemblies and sequences in the synaptic matrix, so that they become part of the network's repertoire of dynamical states and, once encoded, may emerge spontaneously in the network activity. Spontaneous activation of previously encoded sequences is observed during sleep, where this 'replay' may support the systems consolidation process, that is, the hippocampus-driven integration of memories into neocortex. Here, we study neuronal sequences as they emerge, during active behavior and during sleep, in large neuronal ensembles spanning many cortical areas. In the active portion of the recordings, rats were running on a familiar track. We have developed a novel method, based on multivariate autoregressive models (MVAR), to characterize the sequential structure in ensemble activity, even in the absence of sequence 'templates' as previously used. With this method, we can detect neuronal sequences spanning several brain areas, and their reactivation during sleep. Such replay was eminently related to hippocampal sharp waves, and involved preferentially neurons that have a strong functional link with the hippocampus. Interestingly, sequences are replayed at a faster rate in the sleep after the behavior than in the sleep before. These results suggest that spontaneous activity of single neurons is organized at the whole-cortex scale and that the hippocampus plays a major role in orchestrating it, supporting a basic tenet of memory consolidation theory.
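As a toy illustration of the MVAR idea (not the authors' actual method), the sketch below simulates a small population whose units excite one another in a fixed chain, a built-in "sequence", and recovers the coupling matrix with an ordinary least-squares MVAR(1) fit. All sizes and coupling strengths are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, T = 5, 4000

# Ground-truth MVAR(1) coupling: each unit drives the next in a chain,
# plus self-excitation. Lower-triangular, hence stable (eigenvalues = 0.3).
A_true = 0.3 * np.eye(n_units)
for i in range(n_units - 1):
    A_true[i + 1, i] = 0.6

# Simulate x_t = A x_{t-1} + noise.
x = np.zeros((T, n_units))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.standard_normal(n_units)

# MVAR(1) fit by least squares: solve x[1:] ~ x[:-1] @ A_hat.T
A_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T

# The sequential chain (large entries just below the diagonal) is
# recovered, the kind of structure a template-free sequence-detection
# method can then exploit.
```

In this template-free spirit, the fitted coupling matrix itself reveals the ordering of units in the sequence, rather than comparing activity against a predefined template.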

Invited Talks

[I 9] Understanding the roles of prefrontal long-range connections through targeted optogenetic perturbation Ofer Yizhar1 1. Department of Neurobiology, Weizmann Institute of Science, Rehovot doi: 10.12751/nncn.bc2015.0009

Fear-related disorders are thought to reflect strong and persistent learned fear associations resulting from aberrant synaptic plasticity mechanisms. The basolateral amygdala (BLA) and the medial prefrontal cortex (mPFC) play a key role in the acquisition and extinction of fear memories. Strong reciprocal synaptic connections between these two regions are believed to play a role in the encoding of fear memories, but the contribution of these projection pathways to memory formation and maintenance remains elusive. We have established an optogenetic stimulation protocol for manipulating the synaptic strength of the BLA-to-mPFC projection. With this approach, we explored the role of this pathway in fear learning. Using acute slice electrophysiology and extracellular single-unit recordings in behaving mice, we found that optogenetic high-frequency stimulation (oHFS) of BLA projections to the mPFC induced reversible long-term depotentiation of synaptic strength without altering the intrinsic properties of mPFC cells or their spontaneous firing rates. Using this protocol in behaving mice, we found that depotentiation of BLA-mPFC synapses prior to conditioning leads to impaired fear learning. In mice that had already acquired the cued fear association, depotentiation of BLA-mPFC inputs prior to extinction training facilitated the extinction process. Our findings suggest a new role for the BLA-mPFC pathway not only in the acquisition but also in the maintenance of learned associations, and provide a framework for functional analysis of long-range projections.

[I 10] Neural network dysfunction in animal models of schizophrenia Torfi Sigurdsson1 1. Institute of Neurophysiology, Goethe University, Frankfurt doi: 10.12751/nncn.bc2015.0010

Neural network dysfunction lies at the core of many neuropsychiatric disorders such as schizophrenia and autism. Yet how this dysfunction manifests itself at the cellular level is poorly understood. Animal models of neuropsychiatric diseases, many of which model known genetic risk factors, can help to clarify this issue by allowing detailed analysis of neural network dysfunction and its contribution to behavioral abnormalities. In this talk, I will describe work from our own group that has examined how disturbances in long-range functional connectivity in animal models of schizophrenia might contribute to sensory and cognitive deficits. The results are relevant not only for understanding neural network dysfunction in disease states but also how neural networks mediate cognition and behavior in the healthy brain.


[I 11] Functional characterization and measure of the dimensionality of neural responses in cognitive tasks Mattia Rigotti1 1. Columbia University and IBM T.J. Watson Research Center doi: 10.12751/nncn.bc2015.0011

Single-neuron activity in prefrontal cortex (PFC) is characterized by striking complexity: in animals engaged in cognitive behavior, PFC neurons are reliably but idiosyncratically tuned to mixtures of multiple task-related aspects. I will propose that such "mixed selectivity" at the level of individual cells is a signature of high dimensionality at the level of the population activity, whose computational role is consistent with PFC’s importance for cognitive flexibility. This hypothesis is supported by computational models that reveal an impressive advantage of high-dimensional activity patterns over specialized low-dimensional representations in terms of the larger repertoire of downstream response functions that they can accommodate. Moreover, I will show empirical evidence that the dimensionality of the neural activity patterns is predictive of animal behavior, as it collapses in error trials in a working memory task. Having established the importance of high-dimensional neural representations, I will propose a noninvasive method that exploits repetition suppression to estimate the dimensionality of neural activity patterns, even in the case where response properties are highly heterogeneous and not anatomically organized.


Contributed Talks


[C 1] Evaluating Stroke Recovery by Structural Decomposition of Motor Kinematics Borislav Antic1 , Uta Büchler1 , Anna-Sophia Wahl2 , Martin Ernst Schwab2 , Björn Ommer1 1. HCI/IWR, Heidelberg University, Speyerer Straße 6, D-69115 Heidelberg, Germany 2. Brain Research Institute, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland doi: 10.12751/nncn.bc2015.0012

Stroke is a major cause of adult disability, and rehabilitative training is the prevailing approach to enhance motor recovery. However, the way rehabilitation helps to restore lost motor functions by continuously reshaping kinematics is still an open research question. We follow the established setup of a rat model before/after stroke in the motor cortex to analyze the subtle changes in hand motor function solely based on video. Since nuances of paw articulation are crucial, mere tracking and trajectory analysis are insufficient. Thus, we propose a method for automatic structural decomposition and analysis of grasping actions, which bootstraps a set of exemplar classifiers trained on individual hand postures and selects the best hand pose representation using max-projection. A measure of grasping similarity is obtained by a large ensemble of predictors that discriminate individual hand configurations from one another. This overcomplete representation effectively captures the nuances of hand posture and its deformation over time. The classifiers leverage the idea of maximal margin from machine learning not only to quantify functional deficiencies, but also to find specific defects in the grasping sequence via inverse projection. This may reveal fundamental neurophysiological principles of motor recovery after CNS injury and help analyze the efficiency of distinct rehabilitative strategies. Our results support the causal relationship between distinct newly formed corticospinal circuitry and optimal rehabilitative schedules as a basis for almost full motor recovery [1]. Moreover, evaluation shows that our fully automatic analysis is reliable and more effective than the currently prevalent manual analyses.

Structural decomposition of motor kinematics using the proposed overcomplete representation of hand posture helps to distinguish impaired grasping patterns from healthy ones (left) and to find abnormal postures within individual grasps (bottom right). References 1 Wahl AS et al. (2014). Asynchronous therapy restores motor control by rewiring of the rat corticospinal tract after stroke. Science 344(6189):1250-1255. 10.1126/science.1253050
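The max-projection step over an ensemble of exemplar scorers can be illustrated schematically. In this sketch, cosine similarity stands in for the trained exemplar classifiers (a simplification of the max-margin predictors used in the abstract), and the feature vectors are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature vectors for 20 exemplar hand postures (64-d);
# cosine similarity against each exemplar stands in for a trained
# discriminative exemplar classifier.
exemplars = rng.normal(size=(20, 64))
exemplars /= np.linalg.norm(exemplars, axis=1, keepdims=True)

def max_projection(frame, bank):
    """Score a frame against every exemplar scorer in the bank and
    keep the best-matching posture (the max-projection step)."""
    f = frame / np.linalg.norm(frame)
    scores = bank @ f          # one score per exemplar "classifier"
    return int(scores.argmax()), float(scores.max())

# A frame that is a noisy copy of exemplar 7 projects back onto it.
frame = exemplars[7] + 0.1 * rng.normal(size=64)
best, score = max_projection(frame, exemplars)
assert best == 7
```

The vector of all exemplar scores (not just its maximum) then serves as the overcomplete posture representation described in the text.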


[C 2] Correlations and signatures of criticality in neural population models Marcel Nonnenmacher1,2,3 , Christian Behrens2,4,5 , Philipp Berens2,4,5,6,7 , Matthias Bethge1,2,4,5 , Jakob Macke1,2,3


1. Max Planck Institute for Biological Cybernetics, Tuebingen, Germany 2. Bernstein Center for Computational Neuroscience, Tuebingen, Germany 3. research center caesar, Bonn, Germany 4. Centre for Integrative Neuroscience, Tuebingen, Germany 5. Institute of Theoretical Physics, University of Tuebingen, Tuebingen, Germany 6. Baylor College of Medicine, Houston, Texas, USA 7. Institute for Ophthalmic Research, Tuebingen, Germany doi: 10.12751/nncn.bc2015.0013

Large-scale recording methods make it possible to measure the statistics of neural population activity, and thereby to gain insights into the principles that govern the collective activity of neural ensembles. One hypothesis that has emerged from this approach is that neural populations are poised at a thermodynamic critical point [1], and that this may have important functional consequences. Support for this hypothesis has come from studies [2,3] that identified signatures of criticality (such as a divergence of the specific heat with population size) in the statistics of neural activity recorded from populations of retinal ganglion cells. What mechanisms can explain these observations? Do they require the neural system to be fine-tuned to be poised at the critical point, or do they robustly emerge in generic circuits [4,5,6]? We show that indicators of thermodynamic criticality arise in a simple simulation of retinal population activity, without the need for fine-tuning or adaptation. Using simple statistical models [7], we demonstrate that peak specific heat grows with population size whenever the (average) correlation is independent of the number of neurons. The latter is always true when uniformly subsampling a large, correlated population. For weakly correlated populations, the rate of divergence of the specific heat is proportional to the correlation strength. This predicts that neural populations would be strongly correlated if they were optimized to maximize specific heat, which is in contrast with theories of efficient coding that make the opposite prediction. Our findings suggest that indicators of thermodynamic criticality might not require an optimized coding strategy, but rather arise as a consequence of subsampling a stimulus-driven neural population.


a,b) In a simple simulation of the retina, specific heat diverges with population size (“evidence for criticality”). c,d) Divergence rates change with stimulus-induced correlations, in a manner which can be analytically predicted in simple models. Acknowledgements Work funded by the German Federal Ministry of Education and Research (BMBF; FKZ: 01GQ1002, Bernstein Center Tübingen) and the German Research Foundation (BE 5601/1-1 to PB) References 1 Yu, S., Yang, H., Shriki, O., & Plenz, D. (2013). Universal organization of resting brain activity at the thermodynamic critical point. Frontiers in Systems Neuroscience, 7. 10.3389/fnsys.2013.00042 2 Tkačik, G., Mora, T., Marre, O., Amodei, D., Berry II, M. J., & Bialek, W. (2014). Thermodynamics for a network of neurons: signatures of criticality. arXiv preprint arXiv:1407.5946. 3 Mora, T. (2014). Dynamical criticality in the collective activity of a population of retinal neurons. arXiv preprint arXiv:1410.6769. 4 Macke, J. H., Opper, M., & Bethge, M. (2011). Common input explains higher-order correlations and entropy in a simple model of neural population activity. Physical Review Letters, 106(20), 208102. 10.1103/PhysRevLett.106.208102 5 Schwab, D. J., Nemenman, I., & Mehta, P. (2014). Zipf’s law and criticality in multivariate data without fine-tuning. Physical Review Letters, 113(6), 068102. 10.1103/PhysRevLett.113.068102 6 Aitchison, L., Corradi, N., & Latham, P. E. (2014). Zipf’s law arises naturally in structured, high-dimensional data. arXiv preprint arXiv:1407.7135. 7 Tkačik, G., Marre, O., Mora, T., Amodei, D., Berry II, M. J., & Bialek, W. (2013). The simplest maximum entropy model for collective behavior in a neural network. Journal of Statistical Mechanics: Theory and Experiment, 2013(03), P03011. 10.1088/1742-5468/2013/03/P03011
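The central claim, that peak specific heat diverges whenever the average correlation is independent of population size, can be checked in a toy latent-variable model. Here a Beta-distributed firing rate shared by all neurons (our choice, standing in for stimulus-driven common input) fixes the mean pairwise correlation for every n:

```python
import numpy as np
from math import lgamma

def specific_heat_peak(n, a=1.0, b=9.0, temps=np.linspace(0.5, 2.0, 61)):
    """Peak specific heat of a homogeneous population of n binary
    neurons whose spikes share a Beta(a, b) latent rate, so that the
    mean pairwise correlation does not depend on n."""
    K = np.arange(n + 1)
    logC = np.array([lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                     for k in K])
    # log p(K): beta-binomial spike-count distribution.
    logpK = (logC
             + np.array([lgamma(a + k) + lgamma(b + n - k) for k in K])
             + lgamma(a + b) - lgamma(a) - lgamma(b) - lgamma(a + b + n))
    logpx = logpK - logC       # log-probability of one pattern with K spikes
    peaks = []
    for T in temps:
        logw = logC + logpx / T           # Boltzmann weights at temperature T
        logZ = np.logaddexp.reduce(logw)
        P = np.exp(logw - logZ)
        E = -logpx                        # energy of a pattern
        varE = P @ E**2 - (P @ E) ** 2
        peaks.append(varE / (n * T**2))   # specific heat per neuron
    return max(peaks)

# Peak specific heat grows with population size: a "signature of
# criticality" without any fine-tuning of the model.
c20, c40, c80 = (specific_heat_peak(n) for n in (20, 40, 80))
assert c20 < c40 < c80
```

This mirrors the mechanism argued for in the abstract: the common latent rate, not proximity to a critical point, produces the divergence.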

[C 3] State and Parameter Estimation in Neurons of the Song Production Pathway in Zebra Finch Songbirds Nirag Kadakia1 , Henry Abarbanel1 1. Department of Physics, University of California San Diego doi: 10.12751/nncn.bc2015.0014

The brief, stereotypical songs produced by zebra finch songbirds have been studied in terms of auditory output and neural behavior. Neural firing patterns and connectivity have been studied in various regions of the songbird brain known as HVC and RA, which are together responsible for producing the bird’s vocal output. While several archetypal features of neural firing and the auditory output have been measured, it is as yet unclear exactly how the peculiar firing patterns can be explained by appropriate combinations of cellular and network properties. In particular, HVC-to-RA projection neurons have been found to produce sparse, short bursts during the song. On the other hand, HVC interneurons burst densely and broadly throughout the song, as do the RA neurons which receive excitation from HVC and in turn connect to the vocal organ itself. Recent data suggest that the inhibitory effect of the interneurons may play a role in the shaping of individual song motifs, but the exact cellular mechanisms and network connectivity are largely conjectural. This work seeks to incorporate experimental data through a refined data assimilation technique to give insight into cellular properties. In this technique, a proposed model with unknown parameters and dynamical state variables is combined with (presumably sparse) measured data to determine the parameters of the system. The data assimilation method is defined as a path integral representation of transition probabilities, together defining the model trajectory conditioned on the measurements. We have evaluated this path integral by several methods, using combinations of numerical procedures and variational approximations. The approach has been successful in many toy models, and here we extend its application to neural data. We show that it can determine a host of linear and nonlinear parameters and unmeasured state variables in the neural model of the zebra finch HVC to excellent accuracy.

[C 4] Optimal Networks for Integrating two Input Spaces Christian Leibold1 1. Department Biology II, LMU Munich, Großhadernerstr. 2, 82152 Martinsried, Germany doi: 10.12751/nncn.bc2015.0015

To bind different features of a stimulus into a unique neuronal representation, neurons have to integrate information from multiple input spaces and be able to reliably respond only if all features of the encoded stimulus are represented in the input, and not respond if one of the features is missing. We study this problem for two input spaces. As an example, we consider a hippocampal cell that integrates space information (putatively from medial entorhinal cortex) and object identity (putatively from lateral entorhinal cortex). The neuron is supposed to fire only if a specific object is at a specific place, and not supposed to fire if a different object is at that place or the object is at another place. We treat this problem by an analytical approach in which we concurrently minimize the synaptic weight changes and the number of input neurons, and thereby derive coding constraints and the numbers of neurons necessary in the individual input populations. Our model predicts that neurons with anatomically distinct input pathways should exhibit distinct forms of synaptic plasticity, such that synapses relaying fewer input features should be more plastic than synapses from input networks representing many input features. Furthermore, networks that combine two input spaces should exhibit a sparse code. Finally, the output code should be highly invariant with respect to the input space with few features and more strongly varying for the input space with many features. All three predictions are consistent with properties of the hippocampal place code.
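The binding requirement analysed in [C 4] (fire only when a specific object is at a specific place) can be sketched with a thresholded linear neuron. The population sizes and patterns below are illustrative choices of ours, not the quantities derived in the abstract:

```python
import numpy as np

# Hypothetical input codes: 10 places, each represented by 5 dedicated
# units, and 5 objects, each represented by a single unit.
n_place, n_obj = 50, 5
place_patterns = np.zeros((10, n_place))
for i in range(10):
    place_patterns[i, 5 * i:5 * i + 5] = 1.0
obj_patterns = np.eye(n_obj)

# Conjunctive readout for (place 3, object 1): normalise each pathway so
# that either input alone contributes 1, and threshold between 1 and 2.
w_place = place_patterns[3] / place_patterns[3].sum()
w_obj = obj_patterns[1]
theta = 1.5   # AND-like threshold: fires only if both pathways are driven

def fires(place, obj):
    return w_place @ place + w_obj @ obj > theta

assert fires(place_patterns[3], obj_patterns[1])      # right place AND object
assert not fires(place_patterns[3], obj_patterns[0])  # wrong object
assert not fires(place_patterns[2], obj_patterns[1])  # wrong place
```

The abstract's analysis asks how such conjunctive units can be wired with minimal synaptic change and minimal input populations; the sketch only shows the response constraint they must satisfy.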


[C 5] Automatic discovery of brain states from multivariate LFPs during appetitive behaviour Álvaro Tejero-Cantero1,2 , Diego Vidaurre3 , Claire Bratley1 , Colin McNamara1 , Stéphanie Trouche1 , Mark Woolrich3 , David Dupret1 1. MRC Brain Network Dynamics Unit, University of Oxford, Mansfield Road, OX1 3TH Oxford, United Kingdom 2. Faculty of Biology, Ludwig-Maximilians-Universität München, and Bernstein Center for Computational Neuroscience Munich, Großhadernerstr. 2, 82152 Planegg, Germany 3. Oxford Centre for Human Brain Activity, Department of Psychiatry, University of Oxford, Warneford Hospital, OX3 7JX Oxford, United Kingdom doi: 10.12751/nncn.bc2015.0016

Electrical patterns of activity recorded from the brain correlate with sensory perception, internal communication or motor output. Current popular approaches that identify such patterns by their spectral fingerprint scale poorly as the signal becomes more multivariate. At the same time, techniques predicated on stationarity (e.g. Fourier-based) suffer an undesirable tradeoff: either the signal is divided into small windows, resulting in low frequency resolution, or, if longer windows are used, the resulting time resolution is often too low to investigate behaviour or rapid brain communication. We combine the simplest nontrivial temporal structure in the latent brain states (Markovian dynamics) and in the multivariate observations (linear dependence of current samples on past history) in a Bayesian Hidden Markov / Multivariate Autoregressive model (HMM-MAR) [1] that provides a segmentation into states with approximately linear and stationary dynamics. We apply the unsupervised HMM-MAR scheme to the classification and characterisation of mouse brain states using quintuple-area recordings during appetitive behaviour. We characterise both well-known oscillatory states [2,3,4] and hitherto unknown directed interactions between brain areas, as well as the temporal structure of such states in relation to the behavioural phases.
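The segmentation step can be illustrated with a stripped-down variant of the HMM-MAR: Gaussian emissions instead of full multivariate autoregressive observation models, and Viterbi decoding instead of Bayesian inference (both simplifications, and all parameters, are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-d "LFP" alternating between a low-mean and a high-mean regime.
true_states = np.repeat([0, 1, 0, 1, 0], 200)
means = np.array([0.0, 3.0])
y = means[true_states] + rng.normal(size=true_states.size)

# Two-state HMM: sticky transitions, unit-variance Gaussian emissions.
logA = np.log(np.array([[0.99, 0.01], [0.01, 0.99]]))
loglik = -0.5 * (y[:, None] - means[None, :]) ** 2   # up to a constant

# Viterbi decoding of the most probable state sequence.
T = y.size
delta = np.zeros((T, 2))
psi = np.zeros((T, 2), dtype=int)
delta[0] = np.log(0.5) + loglik[0]
for t in range(1, T):
    scores = delta[t - 1][:, None] + logA   # scores[i, j]: state i -> j
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + loglik[t]
path = np.zeros(T, dtype=int)
path[-1] = delta[-1].argmax()
for t in range(T - 2, -1, -1):
    path[t] = psi[t + 1, path[t + 1]]

# The decoded segmentation recovers the alternating regimes.
assert (path == true_states).mean() > 0.95
```

In the full model, each state carries its own MAR coefficients, so states differ by spectral content and cross-area interactions rather than by mean level.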

Left, LFP (4 or 5 sites) recorded in the behaving mouse is segmented into brain states that bundle into metastates [5] related to behaviour. Right, grey state shows a prominent PSD peak at gamma (~80 Hz) frequency and recurs regularly every 600 ms (inset). Acknowledgements We thank Natalia Campo-Urriza and Yves Weissenberger. Work supported by the MRC UK (award MC_UU_12020/7) and a Mid-Career Researchers Grant from the Medical Research Foundation (award C0443) to D.D. References 1 Cassidy, M. J. & Brown, P. Hidden Markov based autoregressive analysis of stationary and nonstationary electrophysiological signals for functional coupling studies. J. Neurosci. Methods 116, 35–53 (2002). 10.1016/S0165-0270(02)00026-2 2 Buzsáki, G. et al. Hippocampal network patterns of activity in the mouse. Neuroscience 116, 201–211 (2003). 10.1016/S0306-4522(02)00669-3 3 Fujisawa, S. & Buzsáki, G. A 4 Hz oscillation adaptively synchronizes prefrontal, VTA, and hippocampal activities. Neuron 72, 153–165 (2011). 10.1016/j.neuron.2011.08.018 4 Ito, J. et al. Whisker barrel cortex delta oscillations and gamma power in the awake mouse are linked to respiration. Nat. Commun. 5, 3572 (2014). 10.1038/ncomms4572 5 Rosvall, M., Axelsson, D. & Bergstrom, C. T. The map equation. Eur. Phys. J. Spec. Top. 178, 13–23 (2009). 10.1140/epjst/e2010-01179-1

[C 6] The Brain-Derived Neurotrophic Factor Val66Met Polymorphism Moderates Reconsolidation of Fear Memory in Humans Manish K Asthana1 , Andreas Mühlberger, Andreas Reif, Simone Schneider, Martin J Herrmann 1. Social and Cognitive Neuroscience, Mackenzie Presbyterian University, Sao Paulo, Brasil 2. Department of Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany 3. Department of Psychiatry, Psychosomatics and Psychotherapy, University of Frankfurt, Frankfurt, Germany 4. Department of Psychiatry, Psychosomatics and Psychotherapy, University of Würzburg, Würzburg, Germany 5. Department of Psychiatry, Psychosomatics and Psychotherapy, University of Würzburg, Würzburg, Germany doi: 10.12751/nncn.bc2015.0017

Memory reconsolidation is the process by which a reactivated memory is restabilized through protein synthesis. It has been well documented that neural encoding of both new and reactivated memories requires synaptic plasticity, which in turn depends on several biological molecules. Recently, brain-derived neurotrophic factor (BDNF) has been extensively investigated for its role in synaptic plasticity during the formation and alteration of pathological memories. However, its role in fear reconsolidation is still unclear; hence the current study was designed to investigate the role of BDNF in fear memory reconsolidation in humans. An auditory fear-conditioning paradigm was conducted, in which 91 participants were randomly assigned to two groups. A day after fear conditioning, one group of participants underwent reactivation of the fear memory followed by extinction training (reminder group), whereas the other group (non-reminder group) did not receive memory reactivation. On day 3, both groups were tested for spontaneous recovery. The threat-elicited defensive response was measured by assessing the skin conductance response (SCR) to the conditioned stimulus (CS). The BDNF polymorphism correlated robustly with the reconsolidation of fear memory. Analysis revealed two important findings: (i) an effect of the reminder on the persistence of fear memory only in Met allele carriers of the BDNF Val66Met polymorphism, with a stronger decline of differential (CS+ vs CS-) SCRs (re-extinction minus extinction) in the reminder group compared with the non-reminder group; and (ii) within the reminder group, a stronger decline of differential (CS+ vs CS-) SCRs in Met allele carriers compared with Val allele carriers. These results indicate a moderating effect of the BDNF Val66Met polymorphism on fear memory reconsolidation.


Posters Tuesday


Theory of neural computation [T 1] Reverse-engineering perceptual inference as a hierarchical interaction between spontaneously active cortical columns Robin Cao1,2 , Alexander Pastukhov1 , Maurizio Mattia, Jochen Braun1 1. Kognitionsbiologie, Otto v. Guericke Universitaet, Leipziger Str. 44, 39120 Magdeburg, Germany 2. Complex Systems, Italian Institute of Health, Via Regina Elena 299, 00161 Roma, Italy doi: 10.12751/nncn.bc2015.0018

Multi-stable perception offers a ready-made paradigm for investigating perceptual inference. Sufficient empirical constraints are now available to reverse-engineer the underlying neural mechanisms in some detail. The “scalar property” of reversal time densities (stereotypical, Gamma-like shape, but widely different means) implicates a spontaneously active representation in which mean and variance of collective activity share the same physical origin. A plausible neural realization is provided by cortical columns with bistable activity dynamics. The complex dependence of reversal times on competing inputs (“Levelt’s propositions”) implicates a hierarchical representation with coupled decision populations and independent evidence populations. To account for the empirical evidence, the former must operate as a differential threshold for the latter, and the latter must accumulate or dissipate evidence gradually. To obtain exploratory dynamics, the dominant decision population must also suppress (habituate) its supporting evidence. The model is fully constrained by perceptual evidence and makes several testable predictions at the perceptual and neural level, including non-stationary features of reversal sequences (‘runs’ of short or long dominance periods) and characteristic shifts in the power spectrum of collective neural activity during reversals (from low- to high-frequency power). We conclude that the mechanisms underlying perceptual inference can now be reconstructed in considerable detail.

Acknowledgements The authors were supported by the BMBF Bernstein Network of Computational Neuroscience and the State of Saxony-Anhalt. References 1 Cao R, Braun J, Mattia M: Stochastic accumulation by cortical columns may explain the scalar property of multistable perception. 10.1103/PhysRevLett.113.098103 2 Gigante G, Mattia M, Braun J, Del Giudice P: Bistable perception modeled as competing stochastic integrations at two levels. 10.1371/journal.pcbi.1000430 3 Gershman SJ, Vul E, Tenenbaum JB: Multistability and perceptual inference. 10.1162/NECO_a_00226 4 Hohwy J, Roepstorff A, Friston K: Predictive coding explains binocular rivalry: an epistemological review. 10.1016/j.cognition.2008.05.010
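The full hierarchical model is beyond a short sketch, but its basic ingredient, competing populations whose alternations are paced by adaptation and noise, can be simulated generically (a textbook-style rivalry model; all parameters are illustrative, not the fitted columnar model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two populations with mutual inhibition, slow adaptation and noise.
dt, steps = 1e-3, 100_000           # 100 s of simulated time
tau_r, tau_a = 0.02, 1.0            # fast activity, slow adaptation
g_inh, g_a, sigma = 2.0, 1.5, 0.1
r, a = np.array([1.0, 0.0]), np.zeros(2)
state, switch_times = 0, []
for t in range(steps):
    drive = np.maximum(1.0 - g_inh * r[::-1] - a, 0.0)
    r += dt / tau_r * (drive - r) + sigma * np.sqrt(dt) * rng.normal(size=2)
    r = np.maximum(r, 0.0)
    a += dt / tau_a * (g_a * r - a)
    diff = r[1] - r[0]
    # hysteresis: register a reversal only on a clear dominance change
    if (state == 0 and diff > 0.2) or (state == 1 and diff < -0.2):
        state = 1 - state
        switch_times.append(t * dt)

durations = np.diff(switch_times)   # dominance durations in seconds
assert durations.size > 10                       # frequent reversals
assert durations.std() / durations.mean() < 1.0  # not memoryless switching
```

Reproducing the scalar property and Levelt's propositions requires the hierarchical evidence/decision structure described in the abstract, which this generic competition model does not capture.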


[T 2] Conductance-based refractory density model of a neuronal statistical ensemble Anton Chizhov1 1. Ioffe Physical-Technical Institute of RAS, Saint-Petersburg, Russia doi: 10.12751/nncn.bc2015.0019

The conductance-based refractory density (CBRD) approach has been developed [1,2] for a population of adaptive and non-adaptive Hodgkin-Huxley-like neurons as a statistical ensemble of similar neurons receiving a common input and individual white or colored Gaussian noise. Neurons of such a population constitute a 1-d continuum in the phase space of the time elapsed since their last spikes. The Hodgkin-Huxley-like equations are parametrized in this phase space. Evolution of the neuronal density determines the population firing rate dynamics. The key element of the CBRD approach is a hazard function of neuronal spiking probability, derived to be universal for a wide range of basic neuron models. The CBRD model is quite precise in comparison with Monte-Carlo simulations, available analytical steady-state solutions for leaky integrate-and-fire neurons, and experimental multi-trial recordings from a single neuron. Based on CBRD, a complex model of a cortical network has been constructed and compared with known experimental intracellular and optical recordings in the primary visual cortex [3]. Acknowledgements The work has been supported by the Russian Foundation for Basic Research with grant 15-04-06234a References 1 Chizhov AV, Graham LJ: Efficient evaluation of neuron populations receiving colored-noise current based on a refractory density method. Phys Rev E 2008, 77: 011910 2 Chizhov AV, Graham LJ: Population model of hippocampal pyramidal neurons, linking a refractory density approach to conductance-based neurons. Phys Rev E 2007, 75: 011924 3 Chizhov AV: Conductance-based refractory density model of primary visual cortex. J Comput Neurosci 2014, 36(2): 297-319. DOI 10.1007/s10827-013-0473-5

[T 3] Function-Structure Relationships in a Three-State Cellular Automaton Model of Excitable Neuronal Networks Koray Ciftci1 1. Biomedical Engineering Department, Namık Kemal University, Turkey doi: 10.12751/nncn.bc2015.0020

The relationship between the anatomical architecture and functional dynamics of the brain is of prime importance for the understanding of information processing in neural systems [1]. In this study, a simple three-state (susceptible, excited and refractory) cellular automaton model [2] was used to explore this relationship. A synaptic failure probability component was also added to the original model. This model was used to describe the activity of a population of neurons organized into some stereotypical networks (random, small-world, scale-free, hierarchical). Mean-field analysis was used to explore the dependence of the network activity on the model parameters. The research question was threefold: How does the network activity depend on the model probabilities? What is the effect of synaptic failures on the structure-function correspondence? What is the relationship between a node’s graph metrics and its activity profile? The simulations and mean-field analysis showed that a certain amount of synaptic failure increases the correlation between the degree centrality of a node and its average activation level. This observation led to the conjecture that a deterministic information flow within the network obscures the topological properties and degrades the function-structure correspondence. References 1 Honey, C. J., Thivierge, J. P., & Sporns, O. (2010). Can structure predict function in the human brain? Neuroimage, 52(3), 766-776. 2 Hütt, M. T., Kaiser, M., & Hilgetag, C. C. (2014). Perspective: network-guided pattern formation of neural dynamics. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1653), 20130522.
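A minimal version of the three-state automaton with synaptic failures, run here on a random directed graph with parameters of our choosing, illustrates the degree-activity relation examined in [T 3]:

```python
import numpy as np

rng = np.random.default_rng(5)

# Three-state units: 0 susceptible, 1 excited, 2 refractory.
n, k = 200, 6
adj = rng.random((n, n)) < k / n    # adj[i, j]: synapse from j to i
np.fill_diagonal(adj, False)
p_spont, p_rec, p_fail = 0.001, 0.3, 0.2

state = np.zeros(n, dtype=int)
activity = np.zeros(n)
steps = 2000
for _ in range(steps):
    excited = state == 1
    # each synapse from an excited node transmits with prob 1 - p_fail
    hits = adj[:, excited] & (rng.random((n, int(excited.sum()))) > p_fail)
    drive = hits.any(axis=1)
    nxt = state.copy()
    nxt[(state == 0) & (drive | (rng.random(n) < p_spont))] = 1
    nxt[state == 1] = 2                               # excited -> refractory
    nxt[(state == 2) & (rng.random(n) < p_rec)] = 0   # refractory -> susceptible
    state = nxt
    activity += state == 1
activity /= steps

# Degree centrality (in-degree) correlates with average activation.
indeg = adj.sum(axis=1)
corr = np.corrcoef(indeg, activity)[0, 1]
assert corr > 0
```

Sweeping `p_fail` and repeating this measurement for small-world, scale-free and hierarchical graphs would reproduce the comparison the abstract describes.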

[T 4] Probabilistic computing based on noise generated by deterministic neural networks Jakob Jordan1 , Tom Tetzlaff1 , Mihai Petrovici2 , Oliver Breitwieser2 , Ilja Bytschok2 , Johannes Bill3 , Johannes Schemmel2 , Karlheinz Meier2 , Markus Diesmann1 1. INM-6 & IAS-6 & JARA BRAIN Institute I, Juelich Research Centre, Juelich, Germany 2. Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany 3. Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria doi: 10.12751/nncn.bc2015.0021

Neural-network models of brain function often rely on the presence of noise [1-5]. To date, the interplay of microscopic noise sources and network function is only poorly understood. In computer simulations and in neuromorphic hardware [6-8], the number of noise sources (random-number generators) is limited. As a consequence, neurons in large functional network models have to share noise sources and therefore receive correlated input. In general, it is unclear how shared-noise correlations affect the performance of functional network models, for example in pattern classification tasks. In addition, it remains an open question how a limited number of noise sources can supply large functional networks with uncorrelated noise. Here, we investigate the performance of neural Boltzmann machines [2-4]. We first demonstrate that correlations in the background noise impair the sampling performance: deviations from the target distribution scale inversely with the number of noise sources. Further, we demonstrate that a recurrent network of deterministic elements can replace a finite ensemble of independent noise sources. As shown recently, inhibitory feedback, abundant in biological neural networks, serves as a powerful decorrelation mechanism [9,10]: shared-noise correlations are actively suppressed by the network dynamics. By exploiting this effect, the functional network performance is significantly improved. We conclude that recurrent neural networks can serve as natural finite-size noise sources for functional neural networks, both in biological and in synthetic neuromorphic substrates. Finally, we show that neural Boltzmann machines with sufficiently strong negative feedback can intrinsically suppress correlations in the background activity, and thereby improve their performance substantially. Acknowledgements SMHB, JARA, EU Grant 269921 (BrainScaleS), FWF #I753-N23 (PNEUMA), The Manfred Stärk Foundation, and EU Grant 604102 (HBP).
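The qualitative effect of shared noise on sampling can be reproduced at toy scale: Gibbs sampling of a three-unit Boltzmann machine with independent versus fully shared random numbers. This is a caricature of the limited-generator setting (network size, weights and update schedule are our choices, not the authors' setup):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)

# A 3-unit Boltzmann machine with symmetric weights and zero biases.
n = 3
W = rng.normal(scale=0.8, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

# Exact target distribution over the 2^n binary states.
states = np.array(list(product([0, 1], repeat=n)), dtype=float)
energy = -0.5 * np.einsum('si,ij,sj->s', states, W, states)
target = np.exp(-energy)
target /= target.sum()

def gibbs(shared_noise, sweeps=100_000):
    """Gibbs-sample the machine; optionally let all units share one
    uniform random number per sweep (a caricature of shared noise)."""
    s = np.zeros(n)
    counts = np.zeros(len(states))
    powers = 2 ** np.arange(n - 1, -1, -1)
    for _ in range(sweeps):
        u_shared = rng.random()
        for i in range(n):
            p_on = 1.0 / (1.0 + np.exp(-W[i] @ s))
            u = u_shared if shared_noise else rng.random()
            s[i] = float(u < p_on)
        counts[int(s @ powers)] += 1
    return counts / counts.sum()

# Total-variation distance to the target distribution.
tv_indep = 0.5 * np.abs(gibbs(False) - target).sum()
tv_shared = 0.5 * np.abs(gibbs(True) - target).sum()
assert tv_indep < 0.03       # independent noise: target is sampled well
assert tv_shared > tv_indep  # shared noise distorts the distribution
```

The abstract's contribution is to replace the independent generators with a decorrelating recurrent network; the sketch only exhibits the problem that this solves.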

36

[T 5] Spike pattern transformation learning in structured spiking neural networks Brian Gardner1 , André Grüning1 1. Department of Computing, University of Surrey, Guildford, Surrey, United Kingdom doi: 10.12751/nncn.bc2015.0022

Few learning rules exist for spiking networks that are as technically efficient and versatile in their deployment as backpropagation is for rate-coded networks, and yet take advantage of a fully temporal coding scheme. Recently proposed multilayer learning rules [1,2] have aimed to address this shortcoming, but have not been generalised in their formulation. Here we propose a general methodology for training layered networks of spiking networks to perform temporally-precise spike pattern transformations. In our analysis we can consider any suitable error function for measuring the dissimilarity between an actual and desired output spike pattern, upon which gradient descent is taken in combination with backpropagation to derive weight updates for each layer. Our method is neuron model independent and applicable to increasingly complex network structures consisting of several layers. The technical performance of our learning method is tested through simulations of multilayer networks of LIF neurons, and is demonstrated to be capable of solving the linearly non-separable Exclusive-OR (XOR) computation on a temporally coded basis. Furthermore, the proposed learning method is indicated to give a high performance level when performing arbitrary input-output spike pattern transformations from inputs of low dimensionality. An example of a network trained to map between an input and target spatio-temporal output pattern is shown in Fig. 1. Interestingly, single-layer learning rules derivable from our method share functional similarities with both ReSuMe [3] and SPAN [4]; however, while ReSuMe and SPAN are heuristically derived from the Widrow-Hoff learning rule, our approach instead takes an error function as its starting point for increased analytical rigour. 37



Posters Tuesday

Fig. 1: Example of a feed-forward multilayer spiking neural network learning a single pattern transformation. (A) Input pattern, and final hidden and output spike rasters. (B) Evolution of output layer spike raster and network error with learning iterations.
Acknowledgements This work was supported by the EPSRC (grant no. EP/J500562/1) and the European Community’s Seventh Framework Programme (FP7/2007-2013, grant no. 604102 (HBP) - the Human Brain Project).
References 1 Sporea, I., & Grüning, A. (2013). Supervised learning in multilayer spiking neural networks. Neural Computation, 25(2), 473-509. 2 Gardner, B., Sporea, I., & Grüning, A. (2015). Encoding spike patterns in multilayer spiking neural networks. arXiv preprint arXiv:1503.09129. 3 Ponulak, F., & Kasinski, A. (2010). Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Computation, 22(2), 467-510. 4 Mohemmed, A., Schliebs, S., Matsuda, S., & Kasabov, N. (2012). SPAN: Spike Pattern Association Neuron for learning spatio-temporal spike patterns. International Journal of Neural Systems, 22(04).
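The error-function-based formulation above can be illustrated with a concrete spike-train dissimilarity measure. The abstract does not specify which error function was used, so the van Rossum distance below is an illustrative sketch of one common choice: each spike train is filtered with a causal exponential kernel and the integrated squared difference of the filtered traces is taken as the error.

```python
import numpy as np

def van_rossum_distance(spikes_a, spikes_b, tau=10.0, dt=0.1, t_max=100.0):
    """Van Rossum distance between two spike trains (times in ms).

    Each train is convolved with a causal exponential kernel exp(-t/tau);
    the distance is the time-integral of the squared trace difference.
    """
    t = np.arange(0.0, t_max, dt)

    def filtered(spikes):
        trace = np.zeros_like(t)
        for s in spikes:
            mask = t >= s
            trace[mask] += np.exp(-(t[mask] - s) / tau)
        return trace

    diff = filtered(spikes_a) - filtered(spikes_b)
    return np.sum(diff ** 2) * dt / tau

# Identical trains have zero distance; a shifted train a positive one.
d_same = van_rossum_distance([20.0, 50.0], [20.0, 50.0])
d_shift = van_rossum_distance([20.0, 50.0], [25.0, 55.0])
```

Because such a distance is differentiable with respect to the filtered traces, gradient descent on it can be combined with backpropagation through the layers, as the abstract describes.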

[T 6] Mechanisms determining the parallel performance of spiking neural simulators Helge Ülo Dinkelbach1 , Julien Vitay1 , Fred H. Hamker1 1. Department of Computer Science, Artificial Intelligence, Chemnitz University of Technology, Germany doi: 10.12751/nncn.bc2015.0023

The complexity and size of neural networks used in the computational neuroscience community have increased rapidly in recent years. The need for an efficient parallel evaluation of these networks is also linked to the development of massively parallel hardware, such as multi-core systems (CPU) or graphical processing units (GPU). A number of research groups have focused on developing parallel algorithms and simulators best suited for spiking neural models on CPUs, e.g. Brian2, NEST or Auryn, or on GPUs, e.g. GeNN, CARLSim or NeMo. We developed the neural simulator ANNarchy (Artificial Neural Networks architect) [1] with the following objectives: 1) a flexible equation-oriented interface in Python similar to Brian; 2) the ability to simulate rate-coded, spiking or hybrid neural networks; 3) a full code generation approach allowing memory structures and elementary computations to be adapted to the desired network; and 4) parallel performance on CPUs and GPUs at least comparable to existing simulators. We provide here a comparison of the parallel performance of ANNarchy with other simulators (Brian2, NEST, Auryn and GeNN) on different benchmarks, such as the COBA network [2, 3] or the Brunel network [4], with or without synaptic plasticity. We show that the choice of specific data structures and computational algorithms for a given network influences the parallel performance. This highlights the need for network-specific code generation, as implemented in ANNarchy.


References 1 Vitay, J., Dinkelbach, H. Ü. and Hamker, F. H. (2015). ANNarchy: a code generation approach to neural simulations on parallel hardware (submitted to Frontiers in Neuroinformatics). 2 Vogels, T. P. and Abbott, L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. J. Neurosci., 25(46), 10786-95. 10.1523/JNEUROSCI.3508-05.2005 3 Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., et al. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. Journal of Computational Neuroscience, 23(3), 349-98. 10.1007/s10827-007-0038-6 4 Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Computational Neuroscience, 8, 183-208.
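Why the choice of data structure matters for spiking simulation can be sketched with a toy comparison (this is not ANNarchy's actual implementation): a dense weight-matrix update touches every synapse at every step, whereas an event-driven scheme visits only the synapses of neurons that actually spiked, which pays off at the low firing rates typical of cortical models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                              # number of neurons
# Random weights with ~10% connectivity (zeros elsewhere).
w = rng.random((n, n)) * (rng.random((n, n)) < 0.1)
spiked = rng.random(n) < 0.02                         # ~2% fire this step

# Dense update: full matrix-vector product, cost O(n^2) regardless of rate.
input_dense = w @ spiked.astype(float)

# Event-driven update: accumulate only columns of spiking neurons,
# cost proportional to (number of spikes) x (fan-out).
input_event = np.zeros(n)
for j in np.flatnonzero(spiked):
    input_event += w[:, j]
```

Both schemes compute the same postsynaptic input; their relative speed depends on firing rate, connectivity and memory layout, which is exactly the kind of network-specific trade-off that motivates code generation.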

[T 7] Are cascade models suitable descriptions of dendritic function? Stefan Häusler1 , Andreas Bartels1 , Christoph Kirst2 , Martin B. Stemmler1 , Andreas V.M. Herz1 1. Bernstein Center for Computational Neuroscience and LMU Munich 2. Rockefeller University, New York doi: 10.12751/nncn.bc2015.0024

The dendrites of many neuron types exhibit complex morphologies and spatially modulated distributions of ionic channels (1), which imply that the dendritic integration of synaptic input is non-trivial. Dendrites might implement linear-nonlinear cascades, with a linear filter acting on the input followed by a static nonlinearity at each step (2, 3), yet this mathematical abstraction of dendritic function is subject to debate. Using simulations, we study dendritic integration in the apical dendrites of pyramidal cells in the presence of active ionic conductances and show how one can identify nonlinear computations that go beyond the linear-nonlinear cascade model. The method relies on varying the inputs such that a chosen output measure stays constant (4). The corresponding set of inputs then defines a lower-dimensional “iso-response manifold” of the neuron under study. We show that all nonlinearities in a static hierarchical cascade can be identified from a few iso-response manifolds under the assumption that the nonlinearities act on the weighted sum of their inputs. We apply this approach to a detailed multi-compartment model of a pyramidal cell (5) and show that the static hierarchical cascade model fails to describe the somatic membrane potential response to synaptic inputs to pairs of branches in the dendritic tuft. As synaptic inputs can elicit dendritic spikes, we extend the cascade model to incorporate nonlinear feedback from subsequent processing stages and demonstrate that this type of model is sufficient to predict the membrane potential responses at the soma. Both the functional form and the range of the nonlinear feedback can be accurately identified by iso-response methods. These results provide a step towards the next generation of single-neuron models that capture key features of dendritic integration. The iso-response technique to distill these features can be directly applied to experiments based on photostimulation or multi-electrode recordings.


References 1 Major, G., Larkum, M. E., & Schiller, J. (2013). Active properties of neocortical pyramidal neuron dendrites. Annual review of neuroscience, 36, 1-24. 2 Häusser, M., & Mel, B. (2003). Dendrites: bug or feature?. Current opinion in neurobiology, 13(3), 372-383. 3 Larkum, M. E., Nevian, T., Sandler, M., Polsky, A., & Schiller, J. (2009). Synaptic integration in tuft dendrites of layer 5 pyramidal neurons: a new unifying principle. Science, 325(5941), 756-760. 4 Gollisch, T., & Herz, A. V. (2012). The iso-response method: measuring neuronal stimulus integration with closed-loop experiments. Frontiers in neural circuits, 6. 5 Poirazi, P., Brannon, T., & Mel, B. W. (2003). Pyramidal neuron as two-layer neural network. Neuron, 37(6), 989-999.
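The iso-response idea in [T 7] can be illustrated with a toy two-branch static cascade; all functional forms and weights below are invented for illustration. Because the output depends on the inputs only through the combined subunit activation, the set of input pairs producing one fixed output traces out a curve whose shape reveals the branch nonlinearity.

```python
import numpy as np

# Toy cascade: output = F(w1*g(x1) + w2*g(x2)), with an assumed branch
# (subunit) nonlinearity g and output nonlinearity F.
def g(x):
    return x ** 2          # illustrative branch nonlinearity

def F(s):
    return np.tanh(s)      # illustrative output nonlinearity

w1, w2 = 1.0, 0.5

def output(x1, x2):
    return F(w1 * g(x1) + w2 * g(x2))

# Iso-response set: all (x1, x2) producing the same output as (1, 1).
# For a static cascade, w1*g(x1) + w2*g(x2) is constant along this set.
target = output(1.0, 1.0)
s_target = w1 * g(1.0) + w2 * g(1.0)
x1_vals = np.linspace(0.0, 1.2, 50)
x2_vals = np.sqrt((s_target - w1 * g(x1_vals)) / w2)   # invert g analytically
```

In an experiment, the curve is traced by closed-loop adjustment of the inputs rather than analytical inversion; departures of the measured manifold from any such cascade-consistent shape are the signature of computations beyond the cascade model.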

[T 8] Point neuron or not? A method to infer functionally relevant dendritic compartments. Stefan Häusler1 , Andreas V.M. Herz1 1. Bernstein Center for Computational Neuroscience and LMU Munich, Germany doi: 10.12751/nncn.bc2015.0025

Dendrites of many types of neurons are capable of all-or-none electrogenesis, which results in a spatial compartmentalization of voltage signals (1). This observation has led to the proposal that dendrites act as independent computational subunits within a multi-layered processing scheme (2, 3), the depth and structure of which presumably depend on the distribution of synaptic inputs (4). A key challenge has been to identify the most relevant computational subunits for the functional description of a neuron, with point neuron models on one side of the spectrum of model complexity and detailed multi-compartment models on the other. We address this question by providing a theoretical framework to quantitatively assess how likely it is that an arbitrary input/output relationship defined by a neuron is consistent with static hierarchical cascade models. More precisely, we show that hierarchies within static cascades are uniquely reflected in symmetries of the functions implemented by the cascades. Secondly, we demonstrate that the probability that a symmetry is present in noisy input/output samples can be estimated by Bayesian inference given a prior distribution over potential cascade models. We apply this framework to a detailed multi-compartment model of a CA1 pyramidal cell (5) and investigate the dependence of the somatic sub-threshold membrane potential, as a measure of the cell’s output, on multiple synaptic Poisson inputs located on different dendritic branches. We show that the number and the configuration of the most likely computational subunits depend not only on the distribution of synaptic inputs, but also on the time scale used to analyze the neuron’s function. The proposed method can be applied to experimental techniques like focal laser-activated release of caged glutamate to provide evidence for the appropriate functional description of a neuron despite the difficulty of recording from many of its fine dendritic branches.
References 1 Major, G., Larkum, M. E., & Schiller, J. (2013). Active properties of neocortical pyramidal neuron dendrites. Annual review of neuroscience, 36, 1-24. 2 Häusser, M., & Mel, B. (2003). Dendrites: bug or feature?. Current opinion in neurobiology, 13(3), 372-383. 3 Larkum, M. E., Nevian, T., Sandler, M., Polsky, A., & Schiller, J. (2009). Synaptic integration in tuft dendrites of layer 5 pyramidal neurons: a new unifying principle. Science, 325(5941), 756-760. 4 Polsky, A., Mel, B. W., & Schiller, J. (2004). Computational subunits in thin dendrites of pyramidal cells. Nature neuroscience, 7(6), 621-627. 5 Poirazi, P., Brannon, T., & Mel, B. W. (2003). Pyramidal neuron as two-layer neural network. Neuron, 37(6), 989-999.


[T 9] The impact of depression on the information rate of the two-channel model of release site Mehrdad Salmasi1,2 , Martin Stemmler3,4 , Stefan Glasauer1,2,3,5 , Alex Loebel3,4 1. Graduate School of Systemic Neurosciences, Ludwig-Maximilian University, Munich, Germany 2. German Center for Vertigo and Balance Disorders, Ludwig-Maximilian University, Munich, Germany 3. Bernstein Center for Computational Neuroscience, Munich, Germany 4. Department of Biology II, Ludwig-Maximilian University, Munich, Germany 5. Department of Neurology, Ludwig-Maximilian University, Munich, Germany doi: 10.12751/nncn.bc2015.0026

We evaluate the impact of synaptic depression on the amount of information that a single release site transfers and prove that depression can enhance the information efficacy of the release site. Synaptic information efficacy has been studied in different neuronal models [1-3]; however, the influence of synaptic dynamics on information transmission is not yet completely understood. We use binary asymmetric channels to model depression in a single release site (Fig. 1a). The channel input, Xi, is the input spike process, and the channel output, Yi, is the release outcome of the release site. In the case of a release, the channel goes to the “used” state; without a release, the channel is switched back to the “recovered” state. We have previously derived the mutual information rate of the two-channel model of depression [4]. Here we expand on this result and study the effect of depression on the information rate of the release site analytically. In particular, we consider the energy consumption of the neuron and calculate the mutual information rate divided by the energy that is consumed in each vesicle release. The results of this analysis are summarized as follows. Result 1: If spontaneous release and spike-induced release are similarly depressed, then the information rate of the release site decreases. Result 2: Let c and d measure the fractions by which spontaneous release and spike-induced release are depressed, respectively. If d is larger than a threshold d0 > c, then depression increases both the mutual information rate and the energy-normalized information rate of the release site. In Fig. 1b, we vary the values of two parameters of the model, d and q, while keeping the rest of the parameters fixed (c = 0.5, p = 0.1). We show in red the values for which depression increases the mutual information rate of the release site. In Fig. 1c, we show the result of a similar analysis for the energy-normalized information rate.


Fig. 1: (a) The two-channel model of a release site under depression. (b) The locus of parameters, d and q, for which the mutual information rate is higher under depression (red region). (c) The same as (b) for the energy-normalized information rate.
Acknowledgements This work was supported by the BMBF grant 01EO1401 (German Center for Vertigo and Balance Disorders).
References 1 London M, Schreibman A, Häusser M, Larkum ME, Segev I: The information efficacy of a synapse. Nature Neuroscience 2002, 5(4), 332-340. 2 Fuhrmann G, Segev I, Markram H, Tsodyks M: Coding of temporal information by activity-dependent synapses. Journal of Neurophysiology 2002, 87(1), 140-148. 3 Goldman MS: Enhancement of information transmission efficiency by synaptic failures. Neural Computation 2004, 16(6), 1137-1162. 4 Salmasi M, Stemmler M, Glasauer S, Loebel A: Information-theoretic analysis of a dynamic release site using a two-channel model of depression. To be presented at the Computational Neuroscience Meeting, CNS 2015, 18-23 July, Prague.
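The information-theoretic quantity at the heart of [T 9] can be made concrete for a single memoryless use of a binary asymmetric channel. This sketch omits the used/recovered state dynamics of the two-channel model; the names p_x, p_rel_spike and p_rel_spont are illustrative assumptions standing for the input spike probability and the two release probabilities.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits, with h2(0) = h2(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mi_binary_asymmetric(p_x, p_rel_spike, p_rel_spont):
    """I(X;Y) for one use of a binary asymmetric channel.

    X = presynaptic spike (1) or no spike (0); Y = vesicle release (1) or
    failure (0). The release probability depends asymmetrically on X.
    """
    p_y1 = p_x * p_rel_spike + (1 - p_x) * p_rel_spont   # marginal P(Y=1)
    h_y = h2(p_y1)                                        # output entropy
    h_y_given_x = p_x * h2(p_rel_spike) + (1 - p_x) * h2(p_rel_spont)
    return h_y - h_y_given_x                              # mutual information

mi = mi_binary_asymmetric(p_x=0.5, p_rel_spike=0.6, p_rel_spont=0.05)
```

In the full two-channel model, depression switches the channel parameters depending on the previous release outcome, so the information *rate* involves the stationary distribution over the two states rather than a single-use computation like this one.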

[T 10] Critical connectivity, anticipation and memory interference Norbert Michael Mayer1 1. Dept. of Electrical Engineering & AIM-HI, Nat’l Chung Cheng University, 168 University Road, Min-Hsiung, Chiayi, Taiwan doi: 10.12751/nncn.bc2015.0027

The aim of this contribution is to present concepts related to biological short-term memory storage and retrieval. Their potential role is illustrated by means of a critical, input-anticipating reservoir model [1], a special type of echo state network (ESN) [2]. The relevance of ESNs to the brain has been shown experimentally [3]. The key idea of the present model is that, for a network with a constant finite capacity, slower-than-exponential forgetting is only possible if predictable information, as well as information that is not useful for further predictions in the future, is filtered away before it can enter the network; this filtering constitutes the anticipation part of the model. It can be understood as in-flow data compression. In addition, the network has to be tuned to critical connectivity of the recurrent layer, which has been proposed as the optimal tuning for reservoir computing (‘edge of chaos’ [4]) and has also been found in the brain [5]; models also exist that show how critical connectivity may be realized in biological neurons [6]. In the present model, the critical point can be tuned in exactly. Results from the above-mentioned model relate to interference theory [7], if one defines sensory input that is (a) unpredicted and (b) relevant for future predictions as an event. The present model is capable of forgetting single events in a power-law fashion, which means that traces of events stay in the network for very long time spans. Childhood amnesia [8] can also be interpreted in the context of the model. An infant has early experiences that are not stored as reproducible memories in later life. Rather, it is plausible to assume that these early childhood experiences form a framework of expectations of usual daily events. Only the deviations from these expectations are stored as recallable memories at later stages of life. Thus, the period of early childhood amnesia can be seen as a critical phase for developing a system of biological in-flow memory compression.

Log-log plot showing memory decay of an unexpected event in a trained network. If expected input follows, decay of the memory of the single event follows a power law (green); if unexpected identical input follows, one finds exponential decay (blue) [1].
Acknowledgements Funding by the Ministry of Science and Technology (MOST) and the National Science Council (NSC) of Taiwan, PID 103-2221-E-194-039 and 102-2221-E-194-050, is thankfully acknowledged.
References 1 Mayer, NM: Input anticipating critical reservoirs show power law forgetting of unexpected input events. Neural Computation 2015, May 17(5). 10.1162/NECO_a_00730 2 Jäger, H: The echo state approach to analysing and training recurrent neural networks with an erratum note. GMD report 148, GMD 2010. 3 Nikolić, D, Häusler S, Singer W & Maass W: Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex. PLoS Biol 2009, 7(12), e1000260. 10.1371/journal.pbio.1000260 4 Boedecker, J, Obst, O, Lizier, J, Mayer, NM & Asada, M: Information processing in echo state networks at the edge of chaos. Theory in Biosciences 2012, 131, 205-213. 10.1007/s12064-011-0146-8 5 Beggs, J & Plenz, D: Neuronal Avalanches Are Diverse and Precise Activity Patterns That Are Stable for Many Hours in Cortical Slice Cultures. J. Neurosci. 2004, 24(22), 5216-5229. 10.1523/JNEUROSCI.0540-04.2004 6 Levina, A, Herrmann, JM & Geisel, T: Dynamical synapses causing self-organized criticality in neural networks. Nature Physics 2007, 3(12). 7 Roediger, HL III & Schmidt, SR: Output interference in the recall of categorized and paired associative lists. Journal of Experimental Psychology: Human Learning and Memory 1980, 6, 91-105. 8 Herbert, JS, & Pascalis, O: Memory development. In Slater, A & Lewis, M (eds.), Introduction to infant development, Oxford University Press, 2007.
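The critical tuning of the recurrent layer described in [T 10] can be sketched with a standard echo state network whose recurrent weight matrix is rescaled to spectral radius 1.0, the critical point; the anticipation/compression component of the model is omitted here, and the network size, connectivity and input scaling are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 200, 1

# Sparse random recurrent weights, rescaled so the spectral radius is
# exactly 1.0 (the critical point; values < 1 give the usual echo state
# property, values > 1 push the network towards chaos).
W = rng.normal(size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
W *= 1.0 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

def run(u_seq):
    """Drive the reservoir with a scalar input sequence; return the states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x)
    return np.array(states)

states = run(np.sin(0.1 * np.arange(300)))
```

A linear readout trained on such states completes the standard ESN; in the abstract's model, the power-law forgetting additionally depends on filtering predictable input before it enters the reservoir.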



[T 11] Adaptation to stimulus statistics links spontaneous activity to the structure of orientation maps in V1 Mihály Bányai1 , Gergő Orbán1 1. Computational Systems Neuroscience Lab, Wigner Research Centre for Physics, Budapest, Hungary doi: 10.12751/nncn.bc2015.0028

Perception can be understood as unconscious inference of the features underlying sensory stimuli. Efficient perception requires a generative model that is adapted to the experience of the animal, i.e. to the statistics of sensory stimuli [1]. Traditional approaches to learning concern the adaptation of the mean responses of sensory neurons and therefore aim to predict changes in marginal response statistics [2]. However, recent studies have highlighted that adaptation shapes not only the mean of the responses but also higher-order statistics, so that the distribution of stimulus-induced activity patterns and that of spontaneous activity patterns are matched [3]. In order to understand how this is achieved, we need to establish the computational principles by which the higher-order structure of the responses of neural populations is learned. The second-order statistics of neural activity are characterised experimentally by measuring various forms of correlations in the activity of pairs of neurons: signal, noise and spontaneous correlations. By exploiting the links that probabilistic inference establishes between these quantities, we show how spontaneous correlations translate into signal correlations. We use a probabilistic image model [4] to show that if a synaptic structure gives rise to spontaneous correlations, then without further constraints it implies a structure over the receptive fields of V1 simple cells similar to orientation maps. We show that the initial structure of the network is beneficial for inference if it reflects prior expectations on feature covariances; conversely, it is detrimental if the activity distribution emerging from the synaptic structure does not reflect the structure implied by the prior distribution. These results predict specific changes in the correlation structure of activity distributions and orientation maps during development.
Acknowledgements This work was supported by an MTA Lendület fellowship.
References 1 Barlow, H. B. (1990). Conditions for versatile learning, Helmholtz’s unconscious inference, and the task of perception. Vision Research, 30(11), 1561–1571. 2 Olshausen, B. A., & Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583), 607–609. 3 Berkes, P., Orbán, G., Lengyel, M., & Fiser, J. (2011). Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment.Science, 331(6013), 83-87. 4 Schwartz, O., & Simoncelli, E. P. (2001). Natural signal statistics and sensory gain control. Nature neuroscience, 4(8), 819-825.


[T 12] Reliability of stochastic nerve axon equations Wilhelm Stannat1,2 , Martin Sauer1 1. Institut für Mathematik, TU Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany 2. BCCN Berlin, Philippstr. 13, 10115 Berlin, Germany doi: 10.12751/nncn.bc2015.0029

Noise is an inherent component of neural systems that accounts for various problems in information processing at all levels of the nervous system. It has been observed that signal transmission through axons using action potentials (APs) may be unreliable due to channel noise, especially in thin axons. Noise can add or delete APs and also causes time jitter in the respective spike times. This work concentrates on the former two effects: we introduce a method for computing probabilities for the addition and deletion of APs in general spatially extended, conductance-based neuronal models subject to noise, based on statistical properties of the membrane potential. We compare different estimators with respect to quality of detection, computational cost and robustness, and propose the integral of the membrane potential along the axon as an appropriate estimator to detect both effects. This is illustrated with numerical results for the spatially extended Hodgkin-Huxley model and the simpler FitzHugh-Nagumo model. In particular, we conclude that depending on the choice of parameters of a given model, above all for the sodium channel, either AP addition or AP deletion is the pronounced effect. Performing a model reduction, we obtain a simplified analytical expression based on linearization at the resting potential (resp. the traveling action potential). This allows the probabilities of AP addition and deletion to be approximated in terms of (classical) hitting probabilities of one-dimensional linear stochastic differential equations. The quality of the approximation with respect to the noise amplitude is discussed using both neuron models above.
Acknowledgements This work is supported by the BMBF, FKZ 01GQ1001B.
References 1 M. Sauer, W. Stannat: Reliability of signal transmission in stochastic nerve axon equations, arXiv:1502.04295

[T 13] Phase transition in finite-size corrections of neural populations with spatial interaction Eric Lucon1 , Wilhelm Stannat2 1. Universite Paris Descartes, 45 rue des Saints-Peres, 75270 Paris Cedex 06, France 2. TU Berlin/BCCN Berlin, Str. des 17. Juni 136, 10623 Berlin/Philippstr. 13, 10115 Berlin, Germany doi: 10.12751/nncn.bc2015.0030

We present recent results on the asymptotic behaviour of large systems of stochastic conductance-based neuronal oscillators with P-nearest-neighbor interaction or with polynomially decaying interaction, modelling the activity of spatially extended neural populations. The dynamics is considered within a random environment taking into account various kinds of inhomogeneities and uncertainties for each neuron. The associated continuous neural field equation, describing the asymptotic statistical properties of the whole network, was obtained in previous work [1]. In the present work [2] we establish, in a mathematically rigorous way, a class of stochastic evolution equations describing finite-size corrections. The main novelty of our work is that, in the case of polynomial decay of the spatial interaction, these finite-size corrections reveal a phase transition from Gaussian towards non-Gaussian fluctuations.
Acknowledgements This work is supported by the BMBF (grant no. 01GQ1001B), Project A11.
References 1 E. Lucon, W. Stannat: Mean field limit for disordered diffusions with singular interactions, Ann. Appl. Probab., 24(5):1946-1993, 2014. doi:10.1214/13-AAP968 2 E. Lucon, W. Stannat: Transition from Gaussian to non-Gaussian fluctuations for mean-field diffusions in spatial interaction, arXiv:1502.00532, 2015
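The integral-of-membrane-potential estimator proposed in [T 12] can be sketched with a minimal Euler-Maruyama simulation of a spatially extended stochastic FitzHugh-Nagumo axon. All parameter values, the boundary conditions and the initial kick below are illustrative assumptions, not those of the cited work.

```python
import numpy as np

rng = np.random.default_rng(2)
nx, dx, dt, steps = 100, 0.5, 0.01, 4000
D, eps, a, b, sigma = 1.0, 0.08, 0.7, 0.8, 0.1

v = -1.2 * np.ones(nx)   # membrane variable, near the resting state
w = -0.6 * np.ones(nx)   # recovery variable
v[:5] = 2.0              # supra-threshold kick at the left end of the axon

integrals = []
for _ in range(steps):
    # Discrete Laplacian (diffusive coupling along the axon).
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
    lap[0] = (v[1] - v[0]) / dx**2        # no-flux boundaries
    lap[-1] = (v[-2] - v[-1]) / dx**2
    dv = v - v**3 / 3 - w + D * lap
    dw = eps * (v + a - b * w)
    # Euler-Maruyama step: additive space-time white noise on v.
    v += dt * dv + sigma * np.sqrt(dt) * rng.normal(size=nx)
    w += dt * dw
    # Estimator: integral of the membrane potential along the axon.
    integrals.append(np.sum(v) * dx)

integrals = np.array(integrals)
```

The time course of this integral rises and falls as a traveling pulse occupies the axon; counting such excursions against the number of pulses launched is one way to detect noise-induced AP addition or deletion, in the spirit of the abstract's estimator.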

Neural dynamics [T 14] Automatic sleep-wake transition in a thalamocortical network coupled with circadian rhythm centers Abolfazl Alipour1,2 1. Conscioustronics Foundation, Janbazan Blvd, Shiraz, Iran 2. Shiraz University of Medical Sciences, Zand Blvd, Shiraz, Iran doi: 10.12751/nncn.bc2015.0031

Sleep is one of the fundamental processes in the brain, and the transition between sleep and wake states is probably the most important transition that happens in the brain. The sleep-wake transition is similar to anesthesia, since both involve loss of consciousness. Previous models have shown that modulation of GABAa receptors in the thalamocortical network by anesthetic drugs can induce the alpha rhythm associated with anesthesia [1]. On the other hand, other models have proposed that an increase in potassium leak currents can trigger the transition from wakefulness to sleep [2]. In the present work, a more physiologically plausible mechanism for the sleep-wake transition is demonstrated. Since wakefulness is mediated through excitatory inputs from the ascending reticular activating system (ARAS), analysing the relationship between the ARAS and the thalamocortical system is an important factor in the sleep-wake transition. To this end, it is shown that both high-frequency activity and slow-wave oscillations can be induced in the thalamocortical network by modulation of the excitatory inputs coming from the ARAS. Moreover, using suprachiasmatic nucleus neurons to control the activity of the ARAS leads to automatic cycles of transition between sleep and wake states in the thalamocortical network. Acknowledgements I acknowledge the help of M. Mahdavi during the early stages of this work and A. Montakhab for his helpful comments. References 1 Ching, ShiNung, et al. "Thalamocortical model for a propofol-induced α-rhythm associated with loss of consciousness." Proceedings of the National Academy of Sciences 107.52 (2010): 22665-22670. 10.1073/pnas.1017069108 2 Hill, Sean, and Giulio Tononi. "Modeling sleep and wakefulness in the thalamocortical system." Journal of Neurophysiology 93.3 (2005): 1671-1698. 10.1152/jn.00915.2004


[T 15] Modelling long-term dynamics of neural excitability by a dynamical timescale model Tie XU, Omri Barak 1. Rappaport Faculty of Medicine, Technion Israel Institute of Technology, Haifa, 3200003, Israel doi: 10.12751/nncn.bc2015.0032

Following the dynamics of single-neuron excitability reveals non-Poissonian properties that depend on activation history [1]. Experimental results over many hours have demonstrated that this dependency spreads across multiple timescales, suggesting a complex underlying biophysical process. Here we propose that the dynamics of excitability in isolated neurons can be understood by a standard neural adaptation model coupled with a dynamical adaptation timescale. Using Bayesian inference, we extract the adaptation timescale from the data. It appears to vary dynamically in time, affected by neural excitability. This is in contrast to the standard fixed-timescale adaptation model. We propose a modified Langevin dynamics for the timescale, which encapsulates well the dynamical adaptation timescale shown by the data. This model gives a better fitting score than the standard adaptation model and is much simpler than a parallel structure that includes different timescales by inserting a group of slow variables. The model captures multi-timescale dynamics in a compact way, predicting the neural excitability dynamics under different stimuli. Previous work on adaptation mechanisms in single neurons or synapses has shown qualitative effects on networks. The relatively simple structure of our model permits further exploration of networks composed of neurons with dynamical adaptation timescales.

Fitting results of neural firing rate under pulse trains with different inter-pulse interval distributions (left panel). The neuron is probed 10 times under each input and the firing rate is averaged over 1 s. The model predicts the excitability dynamics well.
Acknowledgements We thank Daniel Soudry for assistance with the Bayesian estimation. OB is supported by ERC FP7 CIG 2013-618543 and by Fondation Adelis.
References 1 Gal, Asaf, et al. "Dynamics of excitability over extended timescales in cultured cortical neurons." The Journal of Neuroscience 30.48 (2010): 16332-16342. 10.1523/JNEUROSCI.4859-10.2010
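A minimal sketch of the idea in [T 15] — standard rate adaptation whose timescale itself follows Langevin-like dynamics coupled to activity — might look as follows. The functional forms, coupling constants and noise level are invented for illustration and are not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 0.01, 20000
gain, tau_min = 5.0, 0.1

r = 1.0          # firing rate (proxy for excitability)
a = 0.0          # adaptation variable
log_tau = 0.0    # adaptation timescale, evolved in log-space to stay positive

rates, taus = [], []
for i in range(steps):
    stim = 2.0 if (i * dt) % 5.0 < 2.5 else 0.5   # square-wave drive
    tau = tau_min + np.exp(log_tau)
    r = max(0.0, gain * stim - a)                  # threshold-linear rate
    a += dt * (r - a) / tau                        # adaptation with timescale tau
    # Langevin-like dynamics for the timescale: drift grows with activity,
    # relaxes back otherwise, plus white noise.
    log_tau += dt * (0.2 * r - 0.5 * log_tau) + 0.05 * np.sqrt(dt) * rng.normal()
    rates.append(r)
    taus.append(tau)
```

Because tau lengthens during sustained activity, recovery from adaptation is history-dependent, giving the slower-than-single-exponential relaxation that a fixed-timescale model cannot reproduce.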

[T 16] A dynamic threshold equation on leaky-integrate-and-fire neurons does not qualitatively reproduce the dynamics of action potential generation Lukas Sonnenberg1 , Jan Benda1 1. Neuroethology Lab, Institute for Neurobiology, Eberhardt Karls Universität Tübingen doi: 10.12751/nncn.bc2015.0033

Excitable neurons generate action potentials whenever the membrane potential exceeds the current firing threshold. In general, a neuron’s firing threshold is not at a fixed membrane potential but depends on the state of sodium inactivation, delayed rectifier activation, adaptation currents, synaptic conductances, etc. Quadratic and exponential integrate-and-fire models reproduce neuronal firing dynamics remarkably well, but either assume a constant threshold or a threshold dynamics that has not been derived from a more detailed Hodgkin-Huxley-type model. Recently, Platkiewicz and Brette (2010, 2011) derived an approximation for the firing threshold by converting the current equation into the form of an exponential integrate-and-fire model and compared it with simulated data of a single conductance-based model. We checked each step of the derivation with simulations of different conductance-based models in order to assess the impact of the involved approximations. Our simulations confirm that the threshold equation in the context of the exponential integrate-and-fire model gives good insights into the dynamical properties of the spiking threshold. However, using the same threshold equation as a threshold on a leaky integrate-and-fire neuron, as suggested by Platkiewicz and Brette (2011), changes the properties of the resulting neuron model qualitatively. This is simply because a leaky integrate-and-fire neuron has qualitatively different firing properties compared to an exponential integrate-and-fire neuron. We conclude that the Platkiewicz-and-Brette threshold equation cannot be used with a leaky integrate-and-fire neuron. Instead, generalized dynamic ionic currents should remain in the membrane equation. References 1 Platkiewicz and Brette, 2010. 10.1371/journal.pcbi.1000850 2 Platkiewicz and Brette, 2011. 10.1371/journal.pcbi.1001129
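The qualitative difference between the two model classes that [T 16] hinges on can be seen directly by simulating both with the same drive: the exponential integrate-and-fire (EIF) model adds a self-amplifying exponential term at a soft threshold, whereas the leaky integrate-and-fire (LIF) model spikes only by crossing a hard threshold. The parameters below are generic textbook-style values, not taken from the abstract.

```python
import numpy as np

def simulate(I, model="lif", dt=0.01, t_max=200.0):
    """Euler integration of a LIF or exponential IF neuron; returns spike count.

    Units: mV and ms; I is a constant input in mV (already divided by the
    leak conductance). Illustrative parameters, not fitted to any cell.
    """
    tau_m, v_rest, v_reset = 10.0, -65.0, -70.0
    v_t, delta_t = -50.0, 2.0      # soft threshold and slope factor (EIF)
    v_cut = -30.0                  # numerical cutoff marking an EIF spike
    v, spikes = v_rest, 0
    for _ in range(int(t_max / dt)):
        dv = -(v - v_rest) + I
        if model == "eif":
            # Exponential term: self-amplifying depolarization above v_t.
            dv += delta_t * np.exp((v - v_t) / delta_t)
        v += dt * dv / tau_m
        threshold = v_cut if model == "eif" else v_t
        if v >= threshold:
            v, spikes = v_reset, spikes + 1
    return spikes

lif_spikes = simulate(I=16.0, model="lif")
eif_spikes = simulate(I=16.0, model="eif")
```

Near threshold the two models diverge: the LIF trajectory crawls linearly toward a hard boundary, while the EIF trajectory accelerates on its own once past v_t. A threshold equation derived in the EIF framework therefore interacts with the membrane dynamics differently when grafted onto a LIF neuron, which is the mismatch the abstract reports.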

[T 17] Behaviour of microscopic spiking activity relative to synchronization events in spiking network models: numerical observations and analytical results Christoph Bauermeister, Hanny Keren, Jochen Braun doi: 10.12751/nncn.bc2015.0034

The activity of neuronal cultures in vitro is characterised by extended periods of quiescence punctuated by all-or-none synchronisation events termed network spikes (NS). Extending prior work by Tsodyks, Uziel, and Markram [1], we model such networks with biophysically realistic conductance synapses (rather than current synapses). Previously we reported that consistent ordering of microscopic spiking activity is facilitated by certain stochastic connection topologies [2]. We now report that simulated networks with realistic activity dynamics are near a critical point where small variations of topology eliminate NS. More importantly, we now present a theoretical prediction as to how the network topology determines which neurons spike consistently ‘early’ relative to a NS.

References 1 Tsodyks, M; Uziel, A; Markram, H; "Synchrony Generation in Recurrent Networks with Frequency-Dependent Synapses" JNeurosci, 2000, 20:RC50 (1–5). 2 (previous abstract)

[T 18] A computational study on the diversity of neural responses in model neurons of realistic size Aubin Tchaptchet, Hans Albert Braun 1. AG Neurodynamics, Institute of Physiology, Deutschhausstrasse 2, 35039 Marburg, Deutschland doi: 10.12751/nncn.bc2015.0035

No neuron reacts in exactly the same way as another one, and even responses of the same neuron to exactly identical stimuli exhibit a certain variability. To elucidate the membrane properties underlying such neural diversity, we have used a simplified Hodgkin-Huxley-type model neuron which nevertheless allows considering all functionally relevant membrane properties, also taking into account different leak channels and equilibrium potentials. For that, we have replaced the typical unity neuron, referring to a unit capacitance of 1 µF/cm², by neurons of realistic size. The general modeling concept is described in detail in Postnova et al. (2011) and Tchaptchet et al. (2013). Neuronal diversity has been implemented by the addition of random values to the parameters of a mean “standard neuron”, with physiologically adequate limitations and interdependencies, e.g. between the membrane size and maximum currents. Analyzing the impact of randomness in diverse neuron parameters revealed that so-called passive membrane properties like the equilibrium potentials and leak conductances can significantly change the neurons’ responsiveness, i.e. determine whether a neuron is in a silent state or spontaneously firing. In the case of voltage-dependent ion currents, it is mainly the relation between the activation curves of de- and repolarizing conductances and, again, their overlap with the leak voltage. Such randomized model neurons have also been implemented in the Virtual Physiology teaching tool “SimNeuron”, which additionally allows the user to selectively change and examine the impact of specific model parameters (out of altogether 17 parameters), e.g. converting a silent neuron into a pacemaker cell and vice versa (see figure). A fully functioning demo version of “SimNeuron” can be downloaded from www.virtual-physiology.com (Download Center).


For each neuron, we consider the timing density of its first spike in the context of a NS, relative to the moment of peak density of the NS. This analysis shows that certain privileged neurons (‘pioneer neurons’) spike consistently earlier than other neurons. Moreover, the ordering of ‘first spikes’ between pioneers is highly preserved across NS, even though the overall time course varies from trial to trial (‘time-warping’). For each neuron and its specific connectivity, we consider the membrane potential, without resets, in the diffusion approximation of its synaptic inputs. The closer this mean-field potential is to the threshold, the ‘earlier’ the corresponding neuron fires, thus identifying ‘pioneer’ neurons. In conclusion, we provide a comprehensive understanding as to how the effective topology of the network determines the privileged class of pioneer neurons.
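The first-spike analysis described above can be sketched as follows; the function and its variable names are illustrative, not the authors' code:

```python
import numpy as np

def first_spike_latencies(spike_times, ns_peak, window=0.1):
    """Latency of each neuron's first spike relative to the network-spike
    peak time `ns_peak` (seconds), restricted to +/- `window` around it.

    spike_times: list of per-neuron arrays of spike times (s).
    Returns an array of latencies; NaN where a neuron did not participate.
    Comparing these latencies across many network spikes would expose the
    consistently early 'pioneer' neurons.
    """
    latencies = np.full(len(spike_times), np.nan)
    for i, st in enumerate(spike_times):
        st = np.asarray(st, dtype=float)
        near = st[(st >= ns_peak - window) & (st <= ns_peak + window)]
        if near.size > 0:
            latencies[i] = near.min() - ns_peak
    return latencies
```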

Posters Tuesday

Slight alterations of membrane parameters (upper diagrams) can significantly change the neuronal dynamics, as shown in the lower traces, where a neuron that originally was in the steady state (lower left trace) has been converted into a pacemaker cell. References 1 Postnova S, Finke C, Huber MT, Voigt K, Braun HA (2011): Conductance-Based Models of Neurons and Synapses for the Evaluation of Brain Functions, Disorders and Drug Effects. In: Biosimulation in Biomedical Research. Springer, Wien - New York, 93 - 126 2 Tchaptchet A, Postnova S, Finke C, Schneider H, Huber MT, Braun HA (2013): Modeling Neuronal Activity in Relation to Experimental Voltage-/Patch-Clamp Recordings. Brain Res 1536: 159-167
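Randomizing parameters around a mean "standard neuron" with physiological bounds and size/current interdependencies, as described above, might look like this; all values, spreads and names are illustrative placeholders, not those of SimNeuron:

```python
import numpy as np

def randomize_population(n, seed=0):
    """Draw parameters for n neurons around a mean 'standard neuron'.

    Relative membrane size is drawn first; maximum conductances then
    scale with it, implementing the size/current interdependency
    mentioned in the text. All numbers are illustrative.
    """
    rng = np.random.default_rng(seed)
    area = np.clip(rng.normal(1.0, 0.2, n), 0.3, None)             # relative membrane size
    E_leak = rng.normal(-60.0, 3.0, n)                             # mV
    g_leak = np.clip(rng.normal(0.3, 0.05, n), 0.05, None) * area  # scales with size
    g_Na = 120.0 * area                                            # scales with size
    g_K = 36.0 * area
    return {"area": area, "E_leak": E_leak, "g_leak": g_leak,
            "g_Na": g_Na, "g_K": g_K}
```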

[T 19] Spatio-Temporal Linear Response of Spiking Neural Network Models Rodrigo Cofré1 , Bruno Cessac2 1. Department of Theoretical Physics, University of Geneva, 24 quai Ernest-Ansermet, 1211 Geneva, Switzerland 2. Neuromathcomp, INRIA, 2004 route des Lucioles, 06902 Sophia Antipolis, France doi: 10.12751/nncn.bc2015.0036

We study the impact of a weak-amplitude, time-dependent external stimulus on the collective spatio-temporal spike train statistics produced by a stochastic conductance-based integrate-and-fire (CIF) neural network model [1]. On phenomenological grounds, this input-output mapping is often represented with models such as the linear-nonlinear (LN) Poisson model or the so-called generalized linear model. Although this approach produces interesting results in terms of prediction, a complete mechanistic description of how the structure of synaptic connectivity and the dynamical properties of the neuronal population influence the spiking response to a stimulus is missing. On theoretical grounds, this problem can be studied using spiking neural network models. Previous work has addressed this problem by characterizing the change in firing rates and pairwise correlations in terms of network architecture and dynamics, using linear response techniques [2,3,4]. Here, we propose an alternative formulation based on time-dependent Gibbs distributions and linear response theory. Our approach allows us to handle spatio-temporal correlations induced by the stimulus [5]. We obtain a formal closed formula, written in terms of correlation functions of the spontaneous dynamics, which can be numerically approximated for the CIF. In particular, we obtain an explicit dependence on the network parameters (synaptic weights). Our main result is an extension of the fluctuation-dissipation theorem, which considers the non-stationary dynamics and infinite memory of the system [6]. This work opens up new possibilities to explore and quantify the impact of network connectivity on any spatio-temporal pattern of spiking correlations in the presence of a time-dependent stimulus, which can be particularly relevant in the study of population receptive fields.


A: Time-dependent stimulus after t∗. B: Network of integrate-and-fire neurons. C: Spiking responses due to neural dynamics and stimulus. In red, an arbitrary spatio-temporal pattern that we compare statistically before and after stimulus presentation. Acknowledgements This work was supported by the French Ministry of Research and University of Nice (EDSTIC), INRIA, KEOPS ANR-CONICYT, and European Union Project Brainscales, ERC advanced grants ’Nervi’ and ’Bridges’. References 1 M. Rudolph and A. Destexhe. Neural Computation, 18:2146-2210, 2006. 10.1162/neco.2006.18.9.2146 2 N. Brunel and V. Hakim. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11:1621–1671, 1999. 10.1162/089976699300016179 3 J. Trousdale, Y. Hu, E. Shea-Brown, and K. Josic. Impact of network structure and cellular response on spike time correlations. PLoS Comput Biol., 8, 2012. 10.1371/journal.pcbi.1002408 4 V. Pernice, B. Staude, S. Cardanobile, and S. Rotter. How structure determines correlations in neuronal networks. PLoS Comput Biol., 2011. 10.1371/journal.pcbi.1002059 5 Rodrigo Cofré and Bruno Cessac. Exact computation of the maximum-entropy potential of spiking neural-network models. Phys. Rev. E, 89(052117), 2014. 10.1103/PhysRevE.89.052117 6 Rodrigo Cofré and Bruno Cessac. Dynamics and spike trains statistics in conductance-based integrate-and-fire neural networks with chemical and electric synapses. Chaos, Solitons and Fractals, 50(8):13–31, 2013. 10.1016/j.chaos.2012.12.006
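The closed formula above is written in terms of correlation functions of the spontaneous dynamics. Empirically, such spike-train correlation functions can be estimated from binned spike trains; the following is a generic estimator sketch, not the authors' method:

```python
import numpy as np

def cross_covariance(x, y, max_lag):
    """Empirical cross-covariance C(k) = <(x(t)-x̄)(y(t+k)-ȳ)> of two
    binned spike trains, for lags -max_lag..max_lag (in bins)."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    out = []
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            out.append(np.dot(x[:n - k], y[k:]) / (n - k))
        else:
            out.append(np.dot(x[-k:], y[:n + k]) / (n + k))
    return np.array(out)
```

A train that consistently fires one bin after another produces a covariance peak at lag +1.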

[T 20] Role of connectivity topology in the transition between retrieving and non-retrieving phases in neural networks Ilenia Apicella1 , Silvia Scarpetta1,2 , Antonio de Candia2,3,4 1. Dipartimento di Fisica E. R. Caianiello, Università di Salerno, Via Giovanni Paolo II 132, 84084 Fisciano (Sa), Italy 2. INFN, Unità di Napoli 3. Dipartimento di Fisica, Università di Napoli Federico II, Complesso Universitario di Monte S. Angelo, 80126 Napoli, Italy 4. CNR-SPIN, Unità di Napoli doi: 10.12751/nncn.bc2015.0037

Recent work has supported the idea that the brain operates near the critical point of a phase transition, which seems to offer advantages in terms of optimization of dynamical range, information transmission and capacity. We study a Leaky Integrate-and-Fire model whose connectivity was designed to favour the spontaneous emergence of collective oscillatory spatio-temporal patterns of spikes. As a function of the strength of the connections and of the noise acting on the synapses, we observe a transition between a phase in which the network is able to replay the memorized patterns, and a phase in which it is not. Near the transition between the two phases, we find intermittent behaviour, with alternating up and down states in the network, as observed experimentally.

In this work we study the dependence of the transition on the topology of the connections, considering a network in which a fraction of the synapses is shaped by the learning of the patterns, and the rest is randomly wired. We observe that, as the random fraction of the network increases, the line of non-equilibrium transitions ends at a point, beyond which the firing rate is a continuous function of the network parameters. We study the fluctuations at the transition line, and investigate their scaling with the size of the network and the number of memorized patterns.
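Replay of a stored periodic spatio-temporal pattern can be quantified with a phase order parameter. The following minimal sketch uses our own notation, not the authors' observable:

```python
import numpy as np

def retrieval_overlap(stored_phases, observed_phases):
    """|<exp(i(observed - stored))>|: equals 1 when every unit fires at
    its stored phase (possibly up to a global shift), and is near 0 for
    unrelated spike timing."""
    diff = np.asarray(observed_phases) - np.asarray(stored_phases)
    return float(np.abs(np.mean(np.exp(1j * diff))))
```

A value near 1 would indicate the retrieving phase; a value near 0 the non-retrieving phase.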

[T 21] HCN channel response to GABAB-mediated inhibition in multi-compartment models of midbrain dopaminergic neurons Alexander Hanuschkin1,2 , Enrique Perez-Garci3 , Bernhard Bettler3 , Ilka Diester1,2 1. Optophysiology Lab, Department of Biology, University of Freiburg, Germany 2. BrainLinks-BrainTools Cluster of Excellence, University of Freiburg, Germany 3. Department of Biomedicine, Institute of Physiology, Pharmazentrum, University of Basel, Switzerland doi: 10.12751/nncn.bc2015.0038

Inhibition in the brain is mainly driven by the neurotransmitter gamma-aminobutyric acid (GABA), which hyperpolarizes the cell membrane via activation of its ligand-gated ion channel GABAA or the G protein-coupled receptor GABAB. Hyperpolarization-activated cyclic nucleotide-gated (HCN) channels have a neuro-protective function by preventing prolonged or extreme hyperpolarization of neurons [1]. HCN channels can adapt to stabilize the membrane potential and maintain neuronal responsiveness [2]. It is conceivable that HCN channels also play a neuro-protective role in response to prolonged or extensive GABA exposure. Here we investigate the interaction of GABAB-induced hyperpolarization with HCN channels in a multi-compartment NEURON [3] model of midbrain dopaminergic neurons. This approach allows a systematic survey of the GABAB-HCN interaction. We found that HCN channel activation in response to GABAB-induced hyperpolarization is pronounced for dendritic HCN channel expression. We show that clamping the soma to a membrane potential slightly below the resting potential is sufficient to trigger the HCN channel response, due to an inadequate space-clamp [4]. This GABAB-HCN interaction explains the unique characteristics of the GABAB responses described for this neuronal type, i.e. a strongly attenuated GABAB-induced GIRK current [5]. Our model findings agree with recent experimental findings in dopaminergic neurons of the ventral tegmental area [6]. Acknowledgements This work was supported by the Swiss National Science Foundation (31003A-152970) and the Bernstein Award Computational Neuroscience to Ilka Diester. References 1 Biel, M., et al. Hyperpolarization-activated cation channels: from genes to function. Physiol Rev 89, 847-885 (2009) 10.1152/physrev.00029.2008 2 Stegen, M., et al. Adaptive intrinsic plasticity in human dentate gyrus granule cells during temporal lobe epilepsy. Cereb Cortex 22, 2087-2101 (2012) 10.1093/cercor/bhr294 3 Hines, M.L. & Carnevale, N.T.
The NEURON simulation environment. Neural Computation 9, 1179-1209 (1997) 4 Bar-Yehuda D. & Korngreen A. Space-clamp problems when voltage clamping neurons expressing voltage-gated conductances. J Neurophysiol 99(3), 1127-36 (2008) 10.1152/jn.01232.2007 5 Cruz, H.G., et al. Bi-directional effects of GABAB receptor agonists on the mesolimbic dopamine system. Nat Neurosci 7, 153-159 (2004) 10.1038/nn1181 6 Perez-Garci, E., et al. Submitted (2015).
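The defining property exploited above, activation upon hyperpolarization, is commonly summarized by a Boltzmann steady-state activation curve. The sketch below uses illustrative parameters, not values fitted to dopaminergic neurons:

```python
import numpy as np

def hcn_steady_state_activation(V, V_half=-90.0, k=8.0):
    """Fraction of open HCN channels at membrane potential V (mV).

    The positive slope factor k makes activation grow as V hyperpolarizes,
    which is what lets GABAB-mediated hyperpolarization recruit the
    current. V_half and k are illustrative placeholders.
    """
    return 1.0 / (1.0 + np.exp((np.asarray(V, dtype=float) - V_half) / k))
```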


[T 22] A data-driven mean-field-based network model and its application to computational psychiatry Loreen Hertäg1 , Nicolas Brunel2 , Daniel Durstewitz1,3

In recent years, there has been an increasing demand for biologically highly realistic and large-scale neural network studies, especially within the newly developing area of computational psychiatry. However, including substantial physiological and anatomical detail in network models, important for studying the physiological causes of mental disorders, limits a deep understanding of the system’s dynamical mechanisms. Thus, reduced but, in a defined sense, still physiologically valid mathematical model frameworks that allow analytical access to the network dynamics would be highly desirable. Based on our previous work (Hertäg et al. 2012, 2014), we present a data-driven mathematical modelling approach which can be used to efficiently study the effect of changes in single-cell and network parameters on the steady-state dynamics. For this purpose, we derived two approximations for the steady-state firing rate of the exponential integrate-and-fire neuron with spike-triggered adaptation by solving the Fokker-Planck equation. Theoretical f-I curves are compared to single-neuron simulations for a large number of parameter settings derived from in-vitro electrophysiological recordings of rodent prefrontal cortex (PFC) neurons probed with a wide range of inputs. A mean-field network model of the PFC is then developed which captures a number of physiological properties such as different pyramidal and interneuronal cell types, short-term synaptic plasticity and conductance-based synapses with realistic time constants. This mean-field-based model is then used to study the influence of adaptation, short-term synaptic plasticity and different interneuron types on the steady-state network dynamics. Furthermore, we will compare dynamics in "healthy wild-type" networks to networks in which neuronal parameters have been estimated from slice recordings of psychiatrically relevant genetic animal models or pharmacological manipulations.
Acknowledgements This work was funded by grants from the German ministry for education and research (BMBF) within the framework of the e:Med research and funding concept (01ZX1311A & 01ZX1314E) and the BCCN (01GQ1003B). References 1 Hertäg, L., Hass, J., Golovko, T., and Durstewitz, D. (2012). An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data. Frontiers in Computational Neuroscience, 6:62. 2 Hertäg, L., Durstewitz, D., and Brunel, N. (2014). Analytical approximations of the firing rate of an adaptive exponential integrate-and-fire neuron in the presence of synaptic noise. Frontiers in Computational Neuroscience, 8:116.



1. Dept. Theoretical Neuroscience, Bernstein-Center for Computational Neuroscience, Central Institute of Mental Health, Medical Faculty, Mannheim, Germany 2. Departments of Statistics and Neurobiology, University of Chicago, Chicago, US 3. School of Computing and Mathematics, Faculty of Science and Environment, Plymouth University, Plymouth, UK doi: 10.12751/nncn.bc2015.0039
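The steady-state network dynamics above are obtained self-consistently. As a generic illustration, a mean-field fixed point can be found by iterating the population-rate map; here `np.tanh` stands in for the transfer function, whereas the approach described in the abstract uses f-I curves derived from the Fokker-Planck solution:

```python
import numpy as np

def meanfield_steady_state(J, I_ext, f=np.tanh, n_iter=1000):
    """Iterate the population-rate map r <- f(J r + I_ext) to a steady state.

    J: coupling matrix between populations; I_ext: external drive per
    population. f is a stand-in transfer (f-I) function; names and the
    tanh choice are illustrative, not the authors' model.
    """
    r = np.zeros(len(I_ext))
    for _ in range(n_iter):
        r = f(J @ r + I_ext)
    return r
```

With zero coupling the fixed point is just `f(I_ext)`; weak recurrent excitation raises the rate above that uncoupled value.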


[T 23] Spike times predict membrane potential of adaptive LIF neurons Hazem Toutounji1 , Daniel Durstewitz1 1. Department of Theoretical Neuroscience, Central Institute of Mental Health, Bernstein Center for Computational Neuroscience, Medical Faculty Mannheim of Heidelberg University, Square J5, 68159 Mannheim, Germany doi: 10.12751/nncn.bc2015.0040

Due to noise in the underlying dynamics, neural activity unfolds in a stochastic manner. Moreover, recording neural activity introduces an additional source of noise, due to imperfect devices. Measurements are also often of much lower dimension than the underlying dynamics that generate them. A family of statistical models, called State-Space Models (SSM) (Durbin and Koopman, 2012), provides a general framework for estimating the underlying higher-dimensional stochastic dynamical system from noisy measurements. This allows for predicting unobserved measurements, and for comparing mechanistic models of neural activity according to their predictive power. To highlight these advantages of SSMs, we show that spike times alone are sufficient to estimate the subthreshold membrane potential of an adaptive leaky integrate-and-fire (aLIF) neuron. The aLIF neuron is a two-dimensional dynamical system with Gaussian noise. It is also driven by a known injected current generated by an Ornstein-Uhlenbeck process. We derive an estimation procedure that finds the maximum a posteriori voltage and adaptation current paths that generate spikes at the observed times, given the aLIF dynamics (Paninski et al., 2010). We will show how the estimated voltage path fits the real membrane potential and, more importantly, will examine how excluding the adaptation current affects this estimate. The derived procedure can later be extended to estimate the parameters of the underlying dynamics. Hence, this work provides the core of a method to estimate the full SSM of more complex neural models, and to apply it to understanding electrophysiological data at the subthreshold level in both single neurons and networks. Acknowledgements This work was funded by the German Ministry for Education and Research (BMBF) within the e:Med framework (01ZX1311A & 01ZX1314E) and by the German Research Foundation (DFG) within the SFB 1134. References 1 Durbin, J. & Koopman, S. J. (2012).
Time series analysis by state space methods (2nd edition). Oxford University Press, Oxford, UK. 2 Paninski, L., Ahmadian, Y., & Ferreira, D. G. et al. (2010). A new look at state-space models for neural data. Journal of computational neuroscience, 29.1-2: 107-126. 10.1007/s10827-009-0179-x
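The forward (generative) model, an aLIF neuron driven by an Ornstein-Uhlenbeck current, can be sketched in dimensionless toy units; all parameter values are illustrative, not the authors' fit:

```python
import numpy as np

def simulate_alif_ou(T=1.0, dt=1e-4, tau_m=0.02, tau_w=0.2, b=0.05,
                     v_th=1.0, v_reset=0.0, mu=1.5, tau_ou=0.05,
                     sigma=0.3, seed=0):
    """Adaptive leaky integrate-and-fire neuron with an Ornstein-Uhlenbeck
    input current (dimensionless units). Returns spike times in seconds;
    in the state-space setting these are the observations from which the
    latent voltage and adaptation paths would be inferred.
    """
    rng = np.random.default_rng(seed)
    v, w, I = 0.0, 0.0, mu
    spikes = []
    for step in range(int(T / dt)):
        # OU input: relaxes to mu with timescale tau_ou, stationary std sigma
        I += dt / tau_ou * (mu - I) \
             + sigma * np.sqrt(2 * dt / tau_ou) * rng.standard_normal()
        v += dt / tau_m * (-v + I - w)   # leaky membrane
        w += -dt / tau_w * w             # adaptation decays...
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
            w += b                       # ...and jumps at each spike
    return np.array(spikes)
```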


[T 24] Autonomous Control of Network Activity Sreedhar Saseendran Kumar1,2 , Jan Wülfing3 , Leonore Winterer4 , Samora Okujeni1,2 , Joschka Boedecker3 , Ralf Wimmer4 , Martin Riedmiller3 , Bernd Becker4 , Ulrich Egert1,2

Electrical stimulation of the brain is effective in managing the symptoms of a variety of neurological disorders. Despite its success, the therapy suffers from major shortcomings (e.g. side effects), arising partly because stimulation settings do not adapt to the dynamics of brain activity. Recent studies suggest that adaptive approaches may improve therapeutic effectiveness. It remains unclear, however, how to a) best represent network-wide activity as a quantifiable ‘state’ so that a well-posed control problem may be formulated, and b) find optimal stimulation settings that drive activity towards ‘desired’ states, and how the latter would be defined. To address these questions we propose an autonomous closed-loop paradigm using methods of Reinforcement Learning (RL) (Fig. 1). Since biological and computational complexity currently forbids parameter scanning and algorithm development in vivo, we used cultured neuronal networks grown on microelectrode arrays, coupled to an RL-based controller, as a model system. These networks display spontaneous activity dynamics and respond to electrical stimuli with bursts of action potentials. For this system, we designed two specific tasks that capture challenges relevant for the controller. In the first task, our results suggest that the controller autonomously learned a stimulus time that balances the trade-off between response strengths and ongoing activity, demonstrating the capacity of autonomous techniques to exploit underlying quantitative relationships to choose optimal actions. In the second task, we tried to identify the best representation of network activity such that responses were always of a pre-defined strength. Features drawn from the pre-stimulus activity history only marginally improved the controller’s ability to clamp responses. This exposes a potential upper bound for such control strategies, likely imposed by limited observability of the relevant mechanisms in the feature space accessible to the controller.
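The learning problem in the first task can be caricatured as a bandit: each candidate stimulus setting is an action with a stochastic reward. The epsilon-greedy sketch below is a toy stand-in, not the batch RL controller used in the study; `reward_fn` is a hypothetical stand-in for a response-strength measurement from the culture:

```python
import numpy as np

def learn_stimulus_setting(reward_fn, n_actions, n_trials=2000,
                           eps=0.1, seed=0):
    """Epsilon-greedy value estimation over discrete stimulation settings.

    reward_fn(a): stochastic reward of applying setting a.
    Returns estimated action values Q; argmax(Q) is the learned setting.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros(n_actions)
    counts = np.zeros(n_actions)
    for _ in range(n_trials):
        if rng.random() < eps:
            a = int(rng.integers(n_actions))   # explore
        else:
            a = int(np.argmax(Q))              # exploit
        counts[a] += 1
        Q[a] += (reward_fn(a) - Q[a]) / counts[a]  # incremental mean
    return Q
```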

Schematic of the proposed autonomous closed-loop control system. Acknowledgements



1. Biomicrotechnology, Institute of Microsystems Engineering, University of Freiburg, GeorgesKöhler-Allee 102, 79110 Freiburg, Germany 2. Bernstein Center Freiburg, Hansastraße 9a, 79104 Freiburg, Germany 3. Machine Learning Lab, Institut für Informatik, University of Freiburg, Georges-Köhler-Allee 079, 79110 Freiburg, Germany 4. Chair of Computer Architecture, University of Freiburg, Georges-Köhler-Allee 051, 79110 Freiburg, Germany doi: 10.12751/nncn.bc2015.0041

Supported by BrainLinks-BrainTools Cluster of Excellence (DFG, EXC 1086), BMBF (FKZ 01GQ0830) and the EU (NAMASEN #264872).

[T 25] Neuronal Connectivity Options along the Edge of Bounded Neural Networks – Analysis of Network Structure and Dynamics Ehsan Safavieh1,2,3 , Sarah Jarvis1,2,4 , Stefan Rotter2,3 , Ulrich Egert1,2 1. Biomicrotechnology, Institute of Microsystems Engineering, University of Freiburg, Freiburg, Germany 2. Bernstein Center Freiburg, Freiburg, Germany 3. Faculty of Biology, University of Freiburg, Freiburg, Germany 4. Department of Bioengineering, Imperial College London, London, UK doi: 10.12751/nncn.bc2015.0042

Neurons which cannot make enough afferent or efferent connections will not survive in neural networks [1, 2]. In biological neural networks in vivo this is found in pathological situations such as stroke areas or epileptic foci. In vitro, neurons located at the boundary of cultured networks encounter a special situation where putative partners are available in one direction only. Axons of neurons located near the edge therefore respond in different ways to increase their connectivity within the boundary. In biologically realistic scenarios, axons reaching the margin may follow the network’s edge, bifurcate and expand in different directions, stop growing, or have additional branches emerging from the soma. We analyzed three edge schemes to map these situations in bounded balanced neural networks with anisotropic axons. Using this model [3, 4], we simulated 12500 neurons in a circular network (radius 1.5 mm, simulated in NEST [5]), where neuronal somata self-organize into clusters and anisotropic axons of excitatory neurons join to create bundles. We compared structural and dynamical properties of the network emerging from these three edge correction mechanisms (Fig. 1). Changing the connectivity of the neurons on the edge has pronounced effects on structural features of the network such as degree distributions, local excitation/inhibition balance, and the eigenvalue spectra of the connectivity matrices. Consequently, network stability, average firing rates, burst durations, and inter-burst intervals were different. With the “To Edge” algorithm (Fig. 1), some neurons could not create enough connections and artificial properties in the network structure arose. “Mini-Axons” provided an equal chance for all neurons to establish connections, but highly active spots emerged due to over-excitation at the edge. The “Axon-Width” algorithm balanced realistic structural and dynamical aspects of the network.
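The boundary problem itself is easy to quantify: neurons near the edge of a bounded domain simply have fewer potential partners in reach. A self-contained illustration of this effect (not the anisotropic-axon model used in the study; all values illustrative):

```python
import numpy as np

def edge_vs_interior_partners(n=1500, radius=1.5, reach=0.2, seed=0):
    """Place n neurons uniformly in a disc of the given radius (mm) and
    count potential partners within `reach`. Returns the mean partner
    count for edge neurons (within `reach` of the boundary) and for
    interior neurons."""
    rng = np.random.default_rng(seed)
    r = radius * np.sqrt(rng.random(n))      # uniform density over the disc
    theta = 2 * np.pi * rng.random(n)
    xy = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    partners = (dist < reach).sum(axis=1) - 1  # exclude self
    edge = r > radius - reach
    return partners[edge].mean(), partners[~edge].mean()
```

Edge neurons end up with markedly fewer partners, which is what the three correction schemes try to compensate.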

Fig. 1 – (a) Initial random axon direction. (b) “To Edge” method: rotate axons. (c) “Axon Width”: the axon field is widened to allow more connections. (d) “Mini Axons”: axons are cropped and grafted. In all methods the total covered area is the same. Acknowledgements Supported by DAAD, BrainLinks-BrainTools Cluster of Excellence (DFG, EXC 1086), BMBF (FKZ 01GQ0830), TIGER, funded by INTERREG IV Rhin Supérieur program, the EU (NAMASEN #264872), and EU (FEDER, #A31)



References 1 Sanes J, Jessell T: The Formation and Regeneration of Synapses. In Principles of Neural Science. New York, USA: McGraw-Hill; 2000. 2 Jacobson M, Weil M, Raff M: Programmed Cell Death in Animal Development. Cell 1997, 88:347–354. 10.1016/S0092-8674(00)81873-5 3 Jarvis S, Okujeni S, Kandler S, Rotter S, Egert U: Axonal anisotropy and connectivity inhomogeneities in 2D networks. BMC Neurosci 2012, 13 (Suppl 1):P145. 10.1186/1471-2202-13-S1-P145 4 Jarvis S: On the necessity of structural heterogeneities for the emergence of stable and separable dynamics in neuronal networks. Albert-Ludwig University of Freiburg; 2012. 5 Gewaltig M-O, Diesmann M: NEST (NEural Simulation Tool). Scholarpedia 2007, 2:1430. 10.4249/scholarpedia.1430

[T 26] Spiking neural network model of idle rhythmic activity Alexander Simonov1,2 , Pavel Esir1,2 1. Dept. of Theory of Oscillations and Automatic Control, N.I. Lobachevsky State University of Nizhny Novgorod, 23 Gagarin ave., Nizhny Novgorod, Russia 2. Neuroscience Center of Institute of Biology and Biomedicine, N.I. Lobachevsky State University of Nizhny Novgorod, 23 Gagarin ave., Nizhny Novgorod, Russia doi: 10.12751/nncn.bc2015.0043

Neural networks of the brain generate various types of oscillatory patterns [1]. Among them, alpha band activity is considered a fingerprint of a resting or idling state of the brain. Event-related synchronization and desynchronization were observed during motor tasks and can be used for imaginary-movement-based brain-computer interfaces. To investigate network mechanisms of alpha-rhythm generation and explore possible ways of controlling it, we constructed a simple model of a spiking neural network. The minimal set of features required in the modelled network to reproduce rhythmic activity with the desired frequency properties was determined. The model consists of three neuronal populations, one of them excitatory and the other two inhibitory. We used Izhikevich neurons fine-tuned to fit spiking patterns and excitability properties of three basic types of cortical neurons: pyramidal neurons (PN), fast-spiking cells (FSC) and low-threshold spiking interneurons (LTS) [2]. Wavelet and Fourier transformations were applied to total spike rate (TSR) traces as well as to field potential traces to investigate network signalling in the time-frequency domain. It appears that the network acts in a filter-like manner, converting the broadband spectrum of its inputs into alpha band population activity. Figure 1 illustrates generating and breaking alpha band activity in the model. Spontaneously generated alpha band activity disappears when an input drive is applied to the FSC population after 4 s. From top to bottom panel: 1) Raster plot of network activity. Blue, green and red dots represent spikes of PN, LTS and FSC populations respectively. 2) TSR trace. 3) Wavelet spectrogram of TSR. 4) Fourier spectra of TSR before (left panel) and during (right panel) stimulation of the FSC population. We also applied phase plane analysis to a rate-based model we developed, incorporating two distinct types of inhibition and reproducing the alpha range bandpass filter properties.
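The Izhikevich model used for the three cell classes is compact enough to state in full. Below with the published regular-spiking parameter set as defaults; the study's fine-tuned values for the PN, FSC and LTS populations are not reproduced here:

```python
def izhikevich(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich (2003) neuron, forward Euler with step dt (ms).

    Defaults are the standard regular-spiking parameters; other choices
    of (a, b, c, d) yield fast-spiking, low-threshold-spiking and further
    cortical firing patterns. I: input per time step.
    Returns spike-step indices.
    """
    v, u = -65.0, b * -65.0
    spikes = []
    for t, i_ext in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike cutoff: register and reset
            spikes.append(t)
            v, u = c, u + d
    return spikes
```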


Generating and breaking alpha band activity in spiking neural network. Acknowledgements The work was supported by the Russian Science Foundation (proj. No.14-11-00693), by the Ministry of Education and Science of Russia (proj. Nos.14.581.21.0011, 14.578.21.0074 and 14.578.21.0094) References 1 Buzsaki, G. (2006). Rhythms of the Brain. Oxford University Press, USA 2 Izhikevich, E. M. (2007). Dynamical systems in neuroscience: the geometry of excitability and bursting. Cambridge, Mass: MIT Press

[T 27] Network response theory and its implication for thalamocortical interactions Farzad Farkhooi 1. Institut für Mathematik, Technische Universität Berlin, Str. des 17. Juni 136, 10623 Berlin, Germany 2. Bernstein Center for Computational Neuroscience, Philippstr. 13, Haus 6, 10115 Berlin, Germany doi: 10.12751/nncn.bc2015.0044

The response of complex neuronal networks to a transient input is a basic component for understanding the computations in nervous systems. In this study, we develop a theory (following the Kubo-Green formulas [1]) that can predict the behavior of a network after it is transiently perturbed. The theory uses the simplification that neurons are rate units and that the connectivity pattern is arbitrarily general, but sufficiently weak. We first self-consistently determine the ensemble mean input, the fluctuation variance and the averaged pairwise correlations within the network at its fixed point. Given the local stability condition, we linearize and derive the eigenvalue equation of the system. The product of the corresponding eigenfunctions and the perturbation vector provides the response profile of the system. The theory provides quantitative insight into the possible dynamics that can emerge due to input perturbation in a realistic neuronal network. We further applied the theory to a concrete model example of neocortex to study the transient dynamics of thalamic inputs into the cortical unit at its asynchronous fixed point. The results show that a perturbation vector consistent with experimental evidence that the thalamus provides powerful feed-forward inhibition [2] produces rich complex dynamics in the cortical network (Fig. 1). Interestingly, such complex dynamics have been observed experimentally in the motor system [3], where thalamocortical interactions are known to be a vital element of the system. The theory also


predicts some properties of motor tuning in the cortical units that can be experimentally observed. Taken together, our theory could be a basis for understanding the non-equilibrium dynamics and organizational principles of realistic networks of neurons and the basic computations in the sensory and motor systems.
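For the rate-unit simplification, the linearized steady-state response around a stable fixed point has a closed matrix form. The sketch below uses our own notation and covers only the static case; the full theory additionally treats fluctuations, correlations and temporal structure:

```python
import numpy as np

def steady_linear_response(W, gain, delta_input):
    """Steady-state rate change of a rate network r = phi(W r + I) to a
    small static input perturbation delta_input.

    Linearizing gives delta_r = (1 - D W)^(-1) D delta_input, where
    D = diag(gain) holds the slopes phi'(.) at the fixed point; local
    stability requires the spectrum of D W inside the unit circle.
    """
    D = np.diag(gain)
    n = len(gain)
    rhs = D @ np.asarray(delta_input, dtype=float)
    return np.linalg.solve(np.eye(n) - D @ W, rhs)
```

With no coupling the response is just the perturbation scaled by the local gains; recurrent excitation amplifies it.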

Complex dynamics emerge when the thalamus provides a powerful feed-forward inhibitory transient to cortex. Responses in cortex when the transient activates (a) no cells, (b) only excitatory cells, (c) excitatory and inhibitory cells in a balanced manner, and (d) with the FFI mechanism. Acknowledgements FF would like to thank Yifat Prut for discussions on the role of feed-forward inhibition in the motor system. This work was supported by the BMBF, FKZ 01GQ1001. References 1 R. Kubo, J. Phys. Soc. Jpn. 12:570 (1957); M. S. Green, J. Chem. Phys., 20:1281 (1952). 2 S. J. Cruikshank, et al., Nat Neurosci, 10:462–468 (2007). 3 M. M. Churchland, et al., Nature, 487:51–56 (2012).

[T 28] Adaptive coincidence detection in pyramidal neurons: in vitro experiments and theory Christian Pozzorini1 , Skander Mensi1 , Olivier Hagens1 , Wulfram Gerstner1 1. Ecole Polytechnique Fédérale de Lausanne, Brain Mind Institute, Switzerland doi: 10.12751/nncn.bc2015.0045

In the neurosciences, there are two different schools of thought regarding the function of a single neuron: it could either work as a coincidence detector or as a temporal integrator. While integrators are well suited for rate coding, coincidence detectors respond selectively to synchronous spikes, thereby supporting neural codes in which the precise timing of individual spikes matters. Questioning the possibility of robust temporal codes, standard theories predict that, in response to increasingly strong inputs, cortical neurons switch their behavior from coincidence detection – in the subthreshold regime – to temporal integration – in the suprathreshold regime. Combining in vitro experiments and modeling, we demonstrate that cortical pyramidal neurons adaptively adjust the effective timescale of somatic integration in order to operate as coincidence detectors, regardless of their working regime. Surprisingly, we found that the temporal window over which synaptic inputs are integrated is not entirely

controlled by the membrane timescale, but adapts to the input statistics as a result of the firing threshold dynamics. Our experimental findings are captured and explained by a new Generalized Integrate-and-Fire model which: i) Can be efficiently fitted to intracellular recordings using a new maximum likelihood procedure; ii) Outperforms state-of-the-art models in predicting individual action potentials with millisecond precision; iii) Captures complex forms of single-neuron adaptation over a broad range of input statistics; iv) Can be analytically mapped to an enhanced Generalized Linear Model (GLM) in which both the input filter and the spike-history filter dynamically adapt to the input statistics.
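The key ingredient — a firing threshold that tracks the input and thereby keeps the effective integration window short in every regime — can be illustrated with a toy leaky integrate-and-fire neuron with a dynamic threshold. All parameters below are invented for illustration; this is not the fitted model of the abstract:

```python
import numpy as np

def gif_sim(I, dt=0.1, tau_m=20.0, v_rest=0.0, v_t0=1.0,
            tau_t=50.0, beta=0.5, dv_spike=2.0):
    """Toy LIF with a moving threshold: v_t relaxes towards
    v_t0 + beta*(v - v_rest) and jumps by dv_spike after each spike,
    so sustained depolarization raises the threshold (adaptation)."""
    v, v_t, spikes = v_rest, v_t0, []
    for k, i_ext in enumerate(I):
        v += dt / tau_m * (-(v - v_rest) + i_ext)
        v_t += dt / tau_t * (v_t0 + beta * (v - v_rest) - v_t)
        if v >= v_t:
            spikes.append(k * dt)
            v = v_rest
            v_t += dv_spike
    return np.array(spikes)

rng = np.random.default_rng(1)
T = 12000                                    # 1200 ms at dt = 0.1 ms
packet = np.zeros(T)
packet[10000:10050] = 6.0                    # brief synchronous packet at t = 1000 ms
weak = 0.2 + 0.1 * rng.standard_normal(T)    # weak background drive
strong = 0.9 + 0.1 * rng.standard_normal(T)  # much stronger background drive

sp_weak = gif_sim(weak + packet)
sp_strong = gif_sim(strong + packet)
```

Because the threshold rides up with the mean depolarization, the background alone never fires the neuron in either regime, while the synchronous packet triggers a spike in both — a caricature of coincidence detection that survives the switch from sub- to near-threshold drive.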

[T 29] Spatial patterns of beta oscillations and their relation to power in macaque motor cortex Michael Denker1 , Lyuba Zehl1 , Bjørg Kilavik2 , Markus Diesmann1 , Thomas Brochier2 , Alexa Riehle2,3,4 , Sonja Grün1,3,5 1. Inst. of Neuroscience & Medicine (INM-6) and Inst. for Advanced Simulation (IAS-6), Jülich Research Centre & JARA, Jülich, Germany 2. Institut de Neurosciences de la Timone (INT), CNRS, Aix-Marseille Univ., Marseille, France 3. RIKEN Brain Science Institute, Wako-Shi, Japan 4. Inst. of Neuroscience & Medicine (INM-6), Jülich Research Centre, Jülich, Germany 5. Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany doi: 10.12751/nncn.bc2015.0046

During states of increased arousal, motor preparation, and postural maintenance, the local field potential (LFP) in primary motor (M1) and premotor (PM) cortex typically exhibits oscillations in the beta (12-40 Hz) range [1]. Beta oscillations recorded on separate electrodes are often highly correlated, but exhibit a non-zero phase shift. These shifts were shown to organize spatially in the form of planar wave propagation along preferred directions across the cortical surface during an instructed-delay reaching task [2]. Here we demonstrate that in monkey motor cortex a variety of additional spatial patterns of LFP beta activity may be distinguished outside epochs that exhibit a clear planar wave (see also [3]). We analyzed LFP data recorded by a 10-by-10 Utah electrode array (Blackrock Microsystems), which was chronically implanted in M1 and dorsal PM covering an area of 4×4 mm². The recordings were performed while the monkey was involved in a delayed reach-to-grasp task [4]. Based on the instantaneous phase and phase gradients of the beta-filtered LFPs across the array, we introduce and combine measures to identify different spatial activity patterns: (i) planar waves, (ii) quasi-stationary states (LFPs at all electrodes appear synchronized at near-zero lag), (iii) spatially unstructured states, and (iv) more complex patterns, including circular and radial propagation. We assess the statistical properties of the patterns, including their duration and average direction. In particular, we relate the observed patterns to beta-spindles identified by large instantaneous amplitudes. We find that the wave pattern correlates with the beta power, where the peak of spindles typically coincides with a quasi-stationary state. In combination with previous results [5], this raises the hypothesis that beta power is indicative of spatio-temporal organization of spike synchronization.
Acknowledgements Helmholtz Portfolio Theme Supercomputing and Modeling for the Human Brain (SMHB), EU grant 604102 (HBP), G-Node (BMBF Grant 01GQ1302), ANR-GRASP, Neuro_IC2010, CNRS-PEPS, RIKEN-CNRS Research Agreement


References
1 Kilavik et al. (2012) Cereb Cortex 22:2148
2 Rubino et al. (2006) Nat Neurosci 9:1549
3 Townsend et al. (2015) J Neurosci 35:4657
4 Riehle et al. (2013) Front Neural Circuits 7:48
5 Denker et al. (2011) Cereb Cortex 21:2681
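The phase-gradient logic behind the pattern classification can be sketched as follows: extract instantaneous phases of the (beta-filtered) LFP via the analytic signal, then quantify how coherently the phase gradient points in one direction across the 10×10 array. The coherence measure below (mean resultant length of the gradient directions) is our own stand-in for the combined measures of the abstract, and the signals are synthetic:

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via the FFT-based analytic signal (Hilbert transform)."""
    n = x.shape[-1]
    X = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(X * h, axis=-1))

def gradient_coherence(phase_map):
    """Mean resultant length of phase-gradient directions across the array:
    ~1 for a planar wave, near 0 for spatially unstructured phases."""
    gy, gx = np.gradient(np.unwrap(np.unwrap(phase_map, axis=0), axis=1))
    theta = np.arctan2(gy, gx)
    return np.abs(np.mean(np.exp(1j * theta)))

# synthetic 10x10 array: 20 Hz beta oscillation travelling along x
t = np.arange(0, 1.0, 1 / 1000.0)           # 1 s at 1 kHz
yy, xx = np.mgrid[0:10, 0:10]
planar = np.cos(2 * np.pi * 20 * t[None, None, :] - 0.4 * xx[:, :, None])
rng = np.random.default_rng(2)
unstructured = np.cos(2 * np.pi * 20 * t[None, None, :]
                      + rng.uniform(-np.pi, np.pi, (10, 10))[:, :, None])

ph_planar = analytic_phase(planar)[:, :, 500]        # phase snapshot mid-recording
ph_unstr = analytic_phase(unstructured)[:, :, 500]
c_planar = gradient_coherence(ph_planar)
c_unstr = gradient_coherence(ph_unstr)
```

A high coherence flags a planar wave; a quasi-stationary state would instead show near-zero gradient magnitudes, and low coherence with sizeable gradients flags an unstructured state.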

[T 30] A mechanistic cortical microcircuit of attention for amplification, normalization and suppression Frederik Beuth1 , Fred H. Hamker1

Computational models of visual attention [1,2,3,4,5,6] have replicated a large number of findings from visual attention experiments. However, each computational model has typically been shown to account for only a few data sets. Thus, a general account that fully explains the attentive dynamics in the visual cortex is still missing. To reveal a set of general principles that determine attentional selection in visual cortex, we developed a novel model of attention, focused in particular on explaining single-cell recordings from multiple brain areas. Among those are spatial- and feature-based biased competition, modulation of the contrast response function, modulation of the neuronal tuning curve, and modulation of surround suppression. Neurons are modeled by a dynamic rate code. In contrast to previous models, we use a two-layer structure inspired by the layered cortical architecture, which implements amplification, divisive normalization and suppression as well as spatial pooling. Twelve different attentional effects have been simulated, comprising the best-known effects of attention on the neuronal response. As a proof of concept, the model has been fitted to twelve different data sets. In conclusion, our model proposes that attentional selection emerges from three basic neural mechanisms: amplification, normalized feature suppression and surround suppression. We hypothesize that these attentive mechanisms are not distinct from other neural phenomena and thus also contribute to multiple perceptual observations such as crowding and feature inheritance.

Figure: Model and results.
Acknowledgements Funded by the DFG (grant no. HA2630/6-1), partly by the European Project "Spatial Cognition" (grant no. 600785), and partly by the Research Training Group "Cross-Worlds" (no. GRK 1780).



1. Artificial Intelligence, Chemnitz University of Technology, Strasse der Nationen 62, Chemnitz doi: 10.12751/nncn.bc2015.0047

References
1 Reynolds & Heeger, 2009, Neuron, 10.1016/j.neuron.2009.01.002
2 Boynton, 2009, Vision Res, 10.1016/j.visres.2008.11.001
3 Lee & Maunsell, 2009, PLoS One, 10.1371/journal.pone.0004651
4 Ni & Maunsell, 2012, Neuron, 10.1016/j.neuron.2012.01.006
5 Spratling, 2008, Vision Res, 10.1016/j.visres.2008.03.009
6 Wagatsuma et al., 2013, PLoS One, 10.1371/journal.pone.0080788
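The interplay of amplification and divisive normalization that the model builds on can be illustrated with a Reynolds-Heeger-style normalization equation (reference 1 above). The specific functional form and all parameters are illustrative, not the two-layer model itself:

```python
import numpy as np

def response(contrast, attn_gain=1.0, sigma=0.2, n=2.0):
    """Attentional amplification of the excitatory drive, followed by
    divisive normalization (suppressive drive in the denominator)."""
    drive = (attn_gain * contrast) ** n
    return drive / (sigma ** n + drive)

c = np.logspace(-2, 0, 50)
r_unattended = response(c)
r_attended = response(c, attn_gain=2.0)   # attention amplifies the drive
```

With these parameters, attention shifts the semi-saturation contrast from 0.2 to 0.1, i.e. a leftward shift of the contrast response function (contrast gain) — one of the single-cell modulations listed in the abstract.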

[T 31]

Distribution of pair-wise covariances in neuronal networks

David Dahmen1 , Markus Diesmann1,2,3 , Moritz Helias1 1. Inst. of Neuroscience and Medicine (INM-6) and Inst. for Advanced Simulation (IAS-6), Jülich Research Center and JARA, Jülich, Germany 2. Dept. of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany 3. Dept. of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany doi: 10.12751/nncn.bc2015.0048

Despite the large amount of shared input between nearby neurons in cortical circuits, pairwise covariances in ensembles of spike trains are on average close to zero [1]. This has been well understood in terms of active decorrelation by inhibitory feedback in networks of infinite [2] and finite size [3]. These works derived analytical expressions that explain how the structural parameters of the network as well as its operational regime determine correlations averaged over many neuron pairs. However, with the advent of massively parallel spiking recordings, experiments also show a large variability of covariances across pairs of neurons. Whereas a considerable fraction of this variability may be due to the estimation of correlations from data with limited observation time [1], the structural variability of direct and indirect connections between neurons also contributes non-trivially. The relation between the frozen variability of the network structure and the correlated dynamics can be readily observed in networks of linear rate models (Fig. 1), but a theoretical understanding has so far been lacking. In the current study, we derive an analytical expression for the width of the distribution of integral pairwise covariances. We make use of the fact that correlations in leaky integrate-and-fire and binary networks are well approximated by effective linear rate models [4,5], and combine ideas from spin-glass theory [6] with a generating-function representation of the joint probability distribution of the network activity [7] to obtain the variance of integral pairwise covariances as a function of the parameters of the network connectivity. The expression explains the divergence of the mean covariances and of their width as the critical coupling wcrit is approached, the point at which the linear network loses stability. Using this relation, distributions of correlations can provide insights into the properties of the underlying network structure and its operational regime.

Acknowledgements Partly supported by Helmholtz Portfolio Supercomputing and Modeling for the Human Brain (SMHB), the Helmholtz young investigator group VH-NG-1028, EU Grant 269921 (BrainScaleS), and EU Grant 604102 (HBP).
References
1 Ecker et al. (2010) Decorrelated neuronal firing in cortical microcircuits. Science 327:584–587. 10.1126/science.1179867
2 Renart et al. (2010) The asynchronous state in cortical circuits. Science 327:587–590. 10.1126/science.1179850
3 Tetzlaff et al. (2012) Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol 8:e1002596. 10.1371/journal.pcbi.1002596
4 Grytskyy et al. (2013) A unified view on weakly correlated recurrent networks. Front Comput Neurosci 7. 10.3389/fncom.2013.00131
5 Pernice et al. (2012) Recurrent interactions in spiking networks with arbitrary topology. Phys Rev E 85:031916. 10.1103/PhysRevE.85.031916
6 Sompolinsky and Zippelius (1982) Relaxational dynamics of the Edwards-Anderson model and the mean-field theory of spin glasses. Phys Rev B 25:6860. 10.1103/PhysRevB.25.6860
7 Chow and Buice (2015) Path integral methods for stochastic differential equations. J Math Neurosci 5:8. 10.1186/s13408-015-0018-5
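For the effective linear rate description invoked here [4,5], the integral covariance matrix has the closed form C = (1−W)⁻¹ D (1−W)⁻ᵀ, so the distribution of pairwise covariances can be sampled directly. A minimal numerical sketch with inhibitory Bernoulli connectivity (unit white-noise sources, D = 1, are our simplifying assumption); the width of the off-diagonal entries grows as the coupling approaches the critical value at which the linear network loses stability:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 300, 0.1

def offdiag_stats(g):
    """Mean and width of integral pairwise covariances in a linear rate
    network dr/dt = -r + W r + noise with inhibitory Bernoulli coupling."""
    W = -g * rng.binomial(1, p, (N, N)) / np.sqrt(N * p)
    np.fill_diagonal(W, 0.0)
    B = np.linalg.inv(np.eye(N) - W)
    C = B @ B.T                        # D = identity (unit noise sources)
    off = C[~np.eye(N, dtype=bool)]
    return off.mean(), off.std()

# the width of the covariance distribution increases with the coupling g
stats = {g: offdiag_stats(g) for g in (0.3, 0.7, 0.95)}
```

With the chosen scaling the bulk spectral radius of W is roughly g√(1−p), so g = 0.95 already sits close to the instability and the covariance distribution is visibly broadened.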

[T 32] Simulations on different electrode arrangements of microelectrode arrays Inkeri Vornanen1 , Kerstin Lenk1 , Jari A. K. Hyttinen1 1. Department of Electronics and Communications Engineering, BioMediTech, Tampere University of Technology, Finnmedi 1 L 4, Biokatu 6, 33520 Tampere, Finland doi: 10.12751/nncn.bc2015.0049

Neuronal networks are often studied in vitro using microelectrode arrays (MEAs), where neurons are cultured on top of an electrode grid and the action potentials of the neurons can be recorded. A typical MEA has 60 electrodes with inter-electrode distances of 50–200 microns. However, neuronal networks usually consist of thousands of neurons, so only a small sample of the neurons in the network is recorded. In this study, we inspected how well different typical electrode arrangements can capture the network behavior. To compare various electrode arrangements with the actual neuronal activity, we simulated neuronal networks using the INEX model [1], which consists of spontaneously active excitatory and inhibitory neurons. 1005 neurons were positioned in a grid inside a circle with a 1 mm radius and connected to their 100 nearest neighbors. To model various MEA electrode ensembles, we chose different subsets of neurons for analysis: 1) every nth neuron in the network (n = 1–10), 2) the neurons at the edges and in the center of the network, and 3) grid formations of different sizes: 3×3 = 9, 8×8 = 64 and 16×16 = 256 neurons with

.

Figure ([T 31]): Integral auto- and cross-covariances of an inhibitory Bernoulli network of linear rate units (N=1000, p=0.1). (A,C) Density of occurrences. (B,D) Mean and width of the covariance distribution (dots: numerical results, lines: theoretical predictions).

different distances. For these sets of selected neurons, we calculated spike and burst rates using the CMA algorithm [2] and compared them between the different sets. In the simulations, the neurons at the edges spike and burst less than the neurons in the middle, owing to their different neighborhoods of connections. This resembles real biological networks, where some parts of the network can be more active than others. A lower number of analyzed neurons typically resulted in a lower variability of spike rates, which in some cases caused erroneous median values compared to the activity of the whole network. Also, the better the electrodes cover the entire area of the network, the better the recorded neurons represent the behavior of the network; thus even a low number of electrodes (a 3×3 grid) can provide sufficient results if it covers the entire area of the network.
Acknowledgements This research has been supported by the 3DNeuroN project in the European Union's Seventh Framework Programme, Future and Emerging Technologies, grant agreement n°296590.
References
1 Lenk K (2011) A simple phenomenological neuronal model with inhibitory and excitatory synapses. In Advances in Nonlinear Speech Processing, 232–238.
2 Kapucu FE, Tanskanen JMA, Mikkonen JE, Ylä-Outinen L, Narkilahti S, Hyttinen JAK (2012) Burst analysis tool for developing neuronal networks exhibiting highly varying action potential dynamics. Front Comput Neurosci 6. 10.3389/fncom.2012.00038
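The sampling question can be illustrated with a toy rate model (our own construction, not the INEX model): neurons in a 1 mm disc, with activity proportional to the number of nearby neighbours, so edge neurons are less active. Sampling only the centre then biases the population estimate, while a spatially covering subset of comparable size tracks it:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1005
# neurons uniformly distributed in a disc of 1 mm radius
r = np.sqrt(rng.uniform(0.0, 1.0, N))
phi = rng.uniform(0.0, 2.0 * np.pi, N)
xy = np.c_[r * np.cos(phi), r * np.sin(phi)]

# toy rates: proportional to the number of neighbours within 0.4 mm,
# so neurons at the edge of the disc are less active than central ones
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
rates = 0.1 * ((d < 0.4).sum(axis=1) - 1)          # spikes/s, toy units

full = rates.mean()
central = rates[np.linalg.norm(xy, axis=1) < 0.3]  # electrodes only in the centre
spread = rates[::10]                               # ~100 neurons covering the disc
```

The centre-only "electrode set" overestimates the network activity, whereas the covering subset of roughly 100 neurons stays close to the full-population value — the toy analogue of the coverage effect reported above.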

[T 33] Frequency dynamics of ripples in hippocampal interneuronal networks José R. Donoso1,2 , Nikolaus Maier3 , Dietmar Schmitz2,3 , Richard Kempter1,2 1. Institute for Theoretical Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany 2. Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany 3. Neuroscience Research Center, Charité-Universitätsmedizin, 10117 Berlin, Germany doi: 10.12751/nncn.bc2015.0050

Hippocampal sharp wave-ripples (SWRs) have been implicated in memory consolidation. A SWR is characterized by fast network oscillations (200 Hz 'ripples') superimposed on a slow sharp wave (≤ 50 Hz). During such events, unit firing is phase-locked to the ripple oscillation. The mechanisms by which this entrainment occurs remain unclear. An interneuron network of parvalbumin-positive basket cells has been proposed as a putative pacemaker, but the resistance of the ripple frequency to GABAergic modulators and the phenomenon of intra-ripple frequency accommodation are not well understood. Here we analyze the response of a physiologically constrained in silico model of a CA1 interneuron network to both persistent and transient stimulation. If the network is in the regime of sparsely synchronized oscillations, the ripple frequency is insensitive to changes of the GABAA receptor peak conductance and decay time constant. When the network is driven by an excitatory burst resembling CA3 pyramidal cell activity during SWRs, its transient response exhibits intra-ripple frequency accommodation. This phenomenon can be explained by the dynamically changing state of synchrony of the interneuronal network. Our model thus predicts a specific relationship between the time course of the excitatory input and the instantaneous frequency within single SWR events. Finally, we present data from paired-cell recordings in vitro that support this prediction.


[T 34]

Generation of spikelets in cortical pyramidal neurons

Martina Michalikova1 , Michiel Remme1 , Richard Kempter1,2

Spikelets are brief all-or-none depolarizations of small amplitude (< 20 mV) that can be measured in somatic intracellular recordings. Pronounced spikelet activity has been demonstrated in hippocampal pyramidal neurons in awake behaving [1,2] and anesthetized animals [3]. However, spikelets rarely occur in vitro, and the basic mechanisms underlying their generation in pyramidal neurons are not well understood. Here, we explore the generation of spikelets using mathematical analysis and numerical simulations of compartmental single-neuron models. We show that somatic spikelets can be evoked by weak orthodromic (somatic) inputs that initiate spikes at the axon initial segment (AIS), which propagate down the axon but do not backpropagate to the soma and the dendrites. Spikelet occurrence depends on a sufficiently large difference in spiking thresholds between the soma and the AIS and, additionally, on the impedance mismatch and electrotonic separation between the soma and the AIS. Isolating the cell parameters controlling spikelet generation allowed us to identify possible causes of spikelet absence in in vitro preparations: First, the dendritic current sink in vitro is diminished due to "dendritic pruning" in slices. Second, the fraction of sodium channels available for somatic spiking is usually larger in vitro because of lower spiking activity and a lower resting membrane potential. Finally, the difference in activation voltages between somatic and axonal sodium channels under in vitro conditions might be smaller than under in vivo conditions, as the activation voltage of sodium channels might be controlled by neuronal activity, which is typically much higher in vivo than in vitro. In our models, somatic spikelets represent output spikes that do not activate the soma and the dendrites. Consequently, such a mechanism might be involved in the control of dendritic plasticity and in the regulation of somato-dendritic firing rates without affecting the axonal output of a neuron.
Acknowledgements This work was supported by the Einstein Foundation Berlin and the German Federal Ministry of Education and Research (01GQ0901, 01GQ1001A, 01GQ0972).
References
1 Epsztein J, Lee AK, Chorev E, Brecht M (2010) Impact of spikelets on hippocampal CA1 pyramidal cell activity during spatial exploration. Science 327:474–477.
2 Harvey CD, Collman F, Dombeck DA, Tank DW (2009) Intracellular dynamics of hippocampal place cells during virtual navigation. Nature 461:941–946.
3 Chorev E, Brecht M (2012) In vivo dual intra- and extracellular recordings suggest bidirectional coupling between CA1 pyramidal neurons. J Neurophysiol 108:1584–1593.
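The proposed mechanism can be caricatured with a toy two-compartment (soma-AIS) leaky integrator — all parameters invented for illustration, not the compartmental models of the abstract. The AIS has a low threshold; when it fires, only a brief fraction of the axonal spike current leaks back through the coupling, so the soma (with its much higher threshold) registers a small all-or-none bump rather than a full action potential:

```python
import numpy as np

def soma_ais(I_soma, dt=0.025, tau=10.0, g_c=0.15,
             thr_ais=1.0, thr_soma=3.0, spike_amp=10.0):
    """Two coupled leaky compartments. When the AIS crosses its (lower)
    threshold it fires an axonal spike; a brief fraction of the spike
    current reaches the soma via the coupling and appears there as a
    small spikelet instead of a full somatic action potential."""
    v_s = v_a = 0.0
    back_current = 0.0          # brief axonal spike current reaching the soma
    trace, n_ais = [], 0
    for i_ext in I_soma:
        v_s += dt / tau * (-v_s + i_ext + g_c * (v_a - v_s) + back_current)
        v_a += dt / tau * (-v_a + 5.0 * g_c * (v_s - v_a))  # small AIS compartment
        back_current *= np.exp(-dt / 0.5)                   # ~0.5 ms spike current
        if v_a >= thr_ais:                                  # axonal spike
            n_ais += 1
            back_current += spike_amp * g_c
            v_a = 0.0
        trace.append(v_s)
    return np.array(trace), n_ais

T = 6000                          # 150 ms at dt = 0.025 ms
stim = np.zeros(T)
stim[1000:5000] = 3.0             # weak orthodromic somatic drive
trace, n_ais = soma_ais(stim)
```

With this drive the AIS fires repeatedly while the somatic trace never reaches the somatic threshold — each axonal spike leaves only a small depolarizing bump, the toy analogue of a spikelet.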



1. Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany 2. Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany doi: 10.12751/nncn.bc2015.0051

Posters Tuesday

[T 35] Coordination of innate behaviours by GABAergic cells in lateral hypothalamus M. Carus-Cadavieco1 , M. Gorbati1 , S. van der Veldt1 , F. Ramm1 , A. Ponomarenko1 , T. Korotkova1 1. AG Behavioral Neurodynamics, Leibniz-Institut für Molekulare Pharmakologie (FMP) / Neurocure Cluster of Excellence, Charité Campus Mitte, Berlin, Germany doi: 10.12751/nncn.bc2015.0052

The lateral hypothalamus (LH) is crucial for the regulation of innate behaviors, including food intake and the sleep-wake cycle, yet the temporal coordination of hypothalamic neuronal populations remains elusive. Here we used a combination of high-density electrophysiological recordings and optogenetics in behaving mice to study the neuronal circuits involved in transitions between innate behaviors, as well as the function of GABAergic cells in the LH. Excitatory (ChETA) or inhibitory (halorhodopsin, eNpHR3.0) opsins were expressed in the LH of VGAT-Cre mice to ensure selective targeting of GABAergic cells. Recordings of neuronal activity and optostimulation were performed in various behavioral paradigms assessing innate behaviors, including a goal-directed behavior and a "free-will" environment in which an animal could choose between compartments with food, water, an enriched environment and a home-cage-like enclosure. We found that optogenetic stimulation of GABAergic LH cells at various frequencies, as well as stimulation of the projections of these neurons, changed transitions between innate behaviors. Activation of GABAergic neurons in the LH increased food intake in satiated mice and decreased the latency to feeding, whereas optogenetic inhibition of LH GABA cells decreased feeding even under food deprivation. Furthermore, the activity of LH neurons as well as the oscillatory coordination between the LH and its inputs were behavior- and state-dependent. We are now characterizing the selective roles of various projections of LH GABA cells in innate behaviors. Acknowledgements We thank A. Adamantidis and C. Gutierrez Herrera. We gratefully acknowledge support by the Human Frontier Science Program (RGY0076/2012), the NeuroCure Cluster of Excellence, and the DFG (SPP 1665)

[T 36] Network structure determines the impact of higher-order correlations on the evoked activity dynamics in spiking neuronal networks David Huebner1 , Luiz Tauffer1,2 , Arvind Kumar1 1. Computational Biology, KTH Royal Institute of Technology, Stockholm, Sweden 2. Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany doi: 10.12751/nncn.bc2015.0053

Given that a typical stimulus response involves the co-activation of hundreds of neurons, neurons are expected to express pairwise and higher-order correlations (HoCs), which can be measured experimentally. However, pairwise correlations are sufficient to explain 90% of the total correlation structure (1,2), suggesting that the HoC structure has a very limited influence on the network dynamics. The impact of feedforward input depends heavily on the input statistics (3) and on the network structure (4); therefore, we here investigated the effect of higher-order input correlations on the response of three commonly found network motifs in the brain. To this end, we generated different feed-forward input distributions with identical individual and pairwise rates, but different HoC structure. To introduce HoCs we used three

References
1 Schneidman E et al. (2006) Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440:1007–1012. doi:10.1038/nature04701
2 Tang A et al. (2008) A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J Neurosci 28(2):505–518. doi:10.1523/JNEUROSCI.3359-07.2008
3 Bujan AF, Aertsen A, Kumar A (2015) Role of input correlations in shaping the variability and noise correlations of evoked activity in the neocortex. J Neurosci, in press. doi:10.1523/JNEUROSCI.4536-14.2015
4 Vlachos I, Aertsen A, Kumar A (2012) Beyond statistical significance: Implications of network structure on neuronal activity. PLoS Comput Biol 8(1):e1002311. doi:10.1371/journal.pcbi.1002311

[T 37] Origin and control of dynamic instability in a computational model of epileptic network Christopher Kim1 , Antje Kilias1,2 , Ajith Sahasranamam1 , Stefan Rotter1 , Ulrich Egert1,2 , Arvind Kumar1,3 1. Bernstein Center Freiburg, University of Freiburg, Germany 2. Department of Microsystems Engineering, IMTEK, University of Freiburg, Germany 3. School of Computer Science and Communications, KTH, Royal Institute of Technology, Stockholm, Sweden doi: 10.12751/nncn.bc2015.0054

Human patients and animal models of mesial temporal lobe epilepsy exhibit pronounced mossy fiber sprouting in the dentate gyrus and spatially graded inhibition [1]. In this study, we considered a computational model of recurrently connected excitatory and inhibitory neurons [2] to investigate the effects of mossy fiber sprouting and differential inhibition on the network dynamics. Mossy fiber sprouting is modeled by allowing a subpopulation of excitatory neurons to form additional recurrent connections onto itself, and the effect of differential inhibition is examined by introducing two levels of inhibition in the network. We derive a self-consistent equation for the stationary rate of the network using the Fokker-Planck formalism and study its global properties with XPPAUT [3]. Analytical predictions were verified using network simulations. The main results of the study are that (1) differential inhibition can create dynamic instability even though the network is in an inhibition-dominated regime, (2) promoting activity in non-sprouting neurons by either reducing the inhibitory synaptic strength or applying external


different models: a maximum-entropy distribution with non-zero higher-order cumulants; a distribution with zero higher-order cumulants; and a binomial-like distribution with a negative third-order cumulant and smaller entropy. To study the interplay between input statistics and network architecture, we measured the response to the different HoC structures in three different random recurrent network structures: (1) a neocortex-type network of excitatory and inhibitory neurons with distance-dependent connectivity, (2) a CA1-type network with no recurrent excitation, and (3) a purely inhibitory network with no excitatory neurons. In networks with recurrent excitation, each input HoC structure induces a different response. By contrast, networks without recurrent excitation respond with similar activity for each input HoC structure. Thus, the input HoC structure does have a strong influence on the network response, provided there are recurrent excitatory connections. These results suggest that in brain structures such as CA1 and the striatum, which lack recurrent excitation, the HoC structure may not be needed to determine the evoked responses.
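The construction of input ensembles with matched first- and second-order statistics but different higher-order structure can be illustrated at the level of the population spike count: mirroring a binomial-like count distribution about its mean preserves the mean and variance (hence, under exchangeability, the single-neuron and pairwise rates) while flipping the sign of the third cumulant. A self-contained numerical check — our own illustration, not a reproduction of the three generative models above:

```python
import numpy as np

rng = np.random.default_rng(5)
n_inputs, q, n_samples = 100, 0.2, 200000

# population spike counts per time bin, binomial-like (positive skew)
k_pos = rng.binomial(n_inputs, q, n_samples)
# mirror around the mean: identical mean and variance, negated third cumulant
k_neg = np.clip(2 * n_inputs * q - k_pos, 0, n_inputs).astype(int)

def cumulants(k):
    m = k.mean()
    return m, k.var(), ((k - m) ** 3).mean()

m1p, m2p, m3p = cumulants(k_pos)
m1n, m2n, m3n = cumulants(k_neg)
```

The two ensembles agree in their first two cumulants to within sampling error, yet carry third cumulants of opposite sign — exactly the kind of input pair needed to isolate the effect of HoC structure on a network's response.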

input can suppress the pathological oscillatory activity, and (3) when situated near the bifurcation point, the network exhibits intermittent transitions to the oscillatory state. By examining the nature of these intermittent transitions, we find that the sprouting neurons drive the oscillatory activity, and that the switching frequency can be predicted from a one-dimensional dynamics. We also discuss the increase in the variance of the population activity as the bifurcation point is approached.
Acknowledgements We thank Katharina Heining for fruitful discussions. This work was supported by the German Research Foundation and the INTERREG IV Rhin Supérieur program and European Funds for Regional Development.
References
1 Marx M, Haas C, Haeussler U (2012) Differential vulnerability of interneurons in the epileptic hippocampus. Front Cell Neurosci 7:167.
2 Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8(3):183–208.
3 Ermentrout B (2002) Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students. SIAM, Philadelphia, USA.
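The mean-field skeleton of such an analysis — a self-consistent equation ν = φ(Wν + h) for the stationary rates of the sprouting, non-sprouting and inhibitory populations — can be sketched without the full Fokker-Planck machinery. The sigmoidal transfer function and all weights below are illustrative stand-ins, chosen weak enough that plain fixed-point iteration converges:

```python
import numpy as np

def phi(x):
    """Illustrative sigmoidal transfer function (max 50 Hz)."""
    return 50.0 / (1.0 + np.exp(-(x - 10.0) / 5.0))

def fixed_point(w_sprout, n_iter=500):
    """Stationary rates (sprouting E, non-sprouting E, I) from iterating
    nu = phi(W @ nu + h); the extra recurrent excitation w_sprout onto the
    first population models mossy fiber sprouting."""
    W = np.array([[0.05 + w_sprout, 0.05, -0.15],
                  [0.05,            0.05, -0.15],
                  [0.10,            0.10, -0.05]])
    h = np.array([12.0, 12.0, 10.0])
    nu = np.zeros(3)
    for _ in range(n_iter):
        nu = phi(W @ nu + h)      # contraction for these weak weights
    return nu

nu_control = fixed_point(0.0)
nu_sprout = fixed_point(0.1)
```

Without sprouting the two excitatory populations are identical and settle at the same rate; adding the extra self-coupling selectively raises the stationary rate of the sprouting population, the starting point for the bifurcation analysis sketched in the abstract.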

[T 38] Sensitivity of control nodes of biological neuronal networks to progressive edge pruning Simachew Mengiste1,2 , Ad Aertsen1 , Arvind Kumar1,2 1. Faculty of Biology, Bernstein Center Freiburg, Hansastr. 9a, 79104, Freiburg, Germany 2. Computational Biology, School of Computer Science and Communication, Lindstedtsvägen 24, 114 28, Stockholm, Sweden doi: 10.12751/nncn.bc2015.0055

From the perspective of the brain as a dynamical system, it should be possible to drive a particular brain activity state to another, more desired state, both for therapeutic benefit and to understand brain function. Structural controllability provides the key for identifying the essential control nodes that can drive the network activity to any arbitrary state when activated with an appropriate input [1,2]. Biological neuronal networks (BNNs), like many other complex networks, are not static in their structure: connections change with development, activity-dependent plasticity and disease. Therefore, it is important to understand how changing the structure of the network affects its control profile. To this end, we designed four different strategies to prune edges and thereby change the network structure. We then estimated the change in the control-node count as a function of edge pruning for several complex random networks: Erdős-Rényi, small-world and scale-free networks, and BNNs [3-5]. Different networks showed different sensitivity and/or resilience to each of the pruning strategies. Comparing the change in control-node count as a function of network pruning, we found that the BNNs resemble local attachment-type random networks rather than small-world networks. These results provide a new perspective on synapto- or neurodegenerative diseases. Specifically, they suggest that diseases like Alzheimer's could be understood as a change in the controllability of the network, because the progressive loss of synapses would increase the control-node count and/or change the control profile of the remaining network. Acknowledgements Supported by the Erasmus Mundus Joint Doctoral programme EuroSPIN, BMBF 01GQ0420 to BCCN Freiburg, BrainLinks-BrainTools (DFG grant number EXC 1086), the EU (INTERREG-V grant to Neurex: TIGER) and the DAAD


References
1 Lin C-T (1974) Structural controllability. IEEE Transactions on Automatic Control 19(3):201–208. doi:10.1109/TAC.1974.1100557
2 Liu Y-Y, Slotine J-J, Barabási A-L (2011) Controllability of complex networks. Nature 473(7346):167–173. doi:10.1038/nature10011
3 Oh SW et al. (2014) A mesoscale connectome of the mouse brain. Nature 508(7495):207–214. doi:10.1038/nature13186
4 Watts DJ, Strogatz SH (1998) Collective dynamics of 'small-world' networks. Nature 393(6684):440–442. doi:10.1038/30918
5 Ruths J, Ruths D (2014) Control profiles of complex networks. Science 343(6177):1373–1376. doi:10.1126/science.1242063
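The control-node count itself is cheap to compute: by the minimum-inputs theorem (Liu et al., reference 2 above), the number of driver nodes equals N minus the size of a maximum matching in the bipartite out-copy/in-copy representation of the directed graph. A small sketch with random edge pruning — one conceivable strategy, not the four strategies of the abstract:

```python
import numpy as np

def n_driver_nodes(edges, n):
    """Driver-node count of a directed graph: n - |maximum matching| in the
    bipartite graph of out-copies vs in-copies (augmenting-path matching)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match_to = [-1] * n                    # in-copy v -> matched out-copy

    def augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_to[v] == -1 or augment(match_to[v], seen):
                    match_to[v] = u
                    return True
        return False

    matching = sum(augment(u, [False] * n) for u in range(n))
    return max(n - matching, 1)            # at least one driver node

rng = np.random.default_rng(6)
n, p = 60, 0.1
edges = [(u, v) for u in range(n) for v in range(n)
         if u != v and rng.random() < p]
order = rng.permutation(len(edges))

nd_full = n_driver_nodes(edges, n)
nd_half = n_driver_nodes([edges[i] for i in order[: len(edges) // 2]], n)
nd_tenth = n_driver_nodes([edges[i] for i in order[: len(edges) // 10]], n)
```

Because the pruned edge sets are nested, the maximum matching can only shrink under pruning, so the driver-node count grows monotonically — the quantity tracked above as a function of pruning.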

.

[T 39] Investigation of dendritic integration in CA1 pyramidal neurons using detailed biophysical models Sára Sáray1,2 , Tamás F. Freund1,2 , Szabolcs Káli1,2 1. Institute of Experimental Medicine, Hungarian Academy of Sciences, Szigony u. 43., Budapest 1083, Hungary 2. Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter u. 50/A, Budapest 1083, Hungary doi: 10.12751/nncn.bc2015.0056

Thin dendrites receive the majority of synaptic inputs in cortical pyramidal neurons and, due to the action of voltage-dependent components, they integrate incoming signals nonlinearly. However, because of the poor accessibility of these dendrites, experimental data about the properties of dendritic nonlinearities are scarce. We have developed morphologically and biophysically detailed models of CA1 pyramidal cells to investigate the integrative properties of radial oblique dendrites, where the nonlinearity has been associated with the generation of sodium spikes and/or the activity of NMDA receptors. Na+ spikes increase the slope of the rising phase of somatic EPSPs, while NMDA receptors contribute to the maximal amplitude of the EPSP and in some cases may generate long-lasting, high-amplitude plateau potentials. In a simplified passive model with synaptic inputs containing AMPA and NMDA components, we found that increasing the AMPA decay time constant or increasing the AMPA/NMDA ratio shifts the input-output relation from thresholded to sigmoidal. In the full model, increasing the density of Na+ channels changes signal propagation in dendrites from passive to active, increases the slope of the somatic fast component, and shifts the summation to more supralinear, but at high values it creates oscillations in the dendritic voltage. Decreasing the density of A-type potassium channels also increases somatic amplitudes and supralinearity. Passive parameters of the model also affect integration. Reducing the axial resistance increases somatic EPSP amplitude and slope, and shifts the input-output relation from thresholded to more sigmoidal. Decreasing the membrane capacitance increases the slope of the EPSP, but does not affect the input-output relation for the amplitude. In conclusion, matching the experimentally observed somatic response to synaptic input in detailed models provides valuable constraints on parameters which fundamentally shape dendritic integration.
Acknowledgements We thank Balázs Ujfalussy and Norbert Majubu for useful discussions. Supported by the EU FP7 grant no. 604102 (Human Brain Project) and ERC-2011-ADG-294313 (SERRACO).
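A minimal sketch of the kind of input-output curve discussed above, not the authors' detailed model: a static toy in which a voltage-independent AMPA component sums linearly while an NMDA component unblocks sigmoidally with local depolarization, producing supralinear summation. All parameter values (conductances, unblock midpoint, depolarization per synapse) are illustrative assumptions.

```python
import numpy as np

def nmda_unblock(v, v_half=-20.0, k=8.0):
    """Sigmoidal Mg2+ unblock of NMDA receptors as a function of local voltage (mV)."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def dendritic_response(n_syn, g_ampa=1.0, g_nmda=4.0, v_rest=-70.0, mv_per_syn=4.0):
    """Peak response to n_syn synchronous synapses (arbitrary units)."""
    v_local = v_rest + mv_per_syn * n_syn            # AMPA-driven local depolarization
    return g_ampa * n_syn + g_nmda * n_syn * nmda_unblock(v_local)

n = np.arange(0, 21)
io_curve = dendritic_response(n)
linear_prediction = io_curve[1] * n                   # linear sum of single-input responses

# Once NMDA receptors unblock, the measured curve exceeds the linear sum.
supralinear = io_curve[15] > linear_prediction[15]
```

Raising `g_ampa` relative to `g_nmda` in this toy smooths the transition, qualitatively mirroring the shift from thresholded to sigmoidal summation described in the abstract.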


Posters Tuesday

[T 40]

Modeling a single tripartite synapse

Eero Räisänen1 , Jari A K Hyttinen1 , Kerstin Lenk1 1. Department of Electronics and Communications Engineering, Tampere University of Technology, BioMediTech, Finn-Medi 1 L 4, Biokatu 6, Tampere, Finland doi: 10.12751/nncn.bc2015.0057

Astrocytes have gained increased interest in neuroscience due to their ability to influence synaptic transmission through gliotransmitters. Many studies and models concentrate on tripartite synapses formed by two neurons and an astrocyte (De Pittà et al., PLoS Comput. Biol. 2011). In the presented work we concentrated on the pathway from the presynapse to the astrocyte and back to the presynapse. A version of the Tsodyks-Markram presynaptic model is used as described by De Pittà et al., with astrocytic effects as described in the same paper. These effects are tested in a manner similar to that used when the simulator is combined with the spiking neuronal network INEX by Lenk (LNCS 2011). At each spike event, calcium enters the presynaptic terminal and binds to the vesicle sensors, raising the release variable u by an amount set by the parameter U (= W). An amount x of neurotransmitter resources is present in the presynapse at any given time, and at each spike an amount u*x of these resources is released (RR). Gliotransmitters affect the value of U by modifying the parameter alpha, which describes the effect of presynaptic receptors on release probability; U is shifted towards alpha depending on the amount of gliotransmitter released by the astrocyte. The detection of glutamate in the synaptic cleft triggers an IP3 increase in the astrocyte, followed by a calcium release. When the calcium concentration reaches a certain threshold, gliotransmitters are released from the astrocyte; they are detected by the presynapse, and their effects decay over time. The gliotransmitter levels, together with the frequency of occurring spikes, affect the release amount in the presynapse. We simulated the interaction between one excitatory presynapse and an astrocyte and applied three different spike frequencies, from low to high, and three different initial excitatory synaptic strengths W. The results show that a steady-state input of spikes can lead to a periodic output of the synapse. The periodic output depends on the initial weight (W) of the synapse and on the frequency of spiking.
Acknowledgements This research has been supported by the 3DNeuroN project in the European Union’s Seventh Framework Programme, Future and Emerging Technologies, grant agreement no. 296590. References 1 De Pittà, M., Volman, V., Berry, H., and Ben-Jacob, E. (2011). A tale of two stories: astrocyte regulation of synaptic depression and facilitation. PLoS Comput. Biol. 7:e1002293. doi:10.1371/journal.pcbi.1002293 2 Lenk, K. (2011). A simple phenomenological neuronal model with inhibitory and excitatory synapses. In Advances in Nonlinear Speech Processing (pp. 232-238). Springer Berlin Heidelberg.
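A minimal event-driven sketch of the mechanism described above: a Tsodyks-Markram presynapse in which a gliotransmitter signal G pulls the release parameter U toward alpha, loosely following De Pittà et al. (2011). The parameter values, the exponential decay of G, and the per-spike G increment are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def simulate(n_spikes=50, isi=0.1, U0=0.5, alpha=0.1, tau_f=0.5, tau_d=0.3,
             tau_g=2.0, g_release=0.4):
    """Per-spike released resources u*x for regular spiking at interval isi (s)."""
    u, x, G = 0.0, 1.0, 0.0
    released = []
    for _ in range(n_spikes):
        u *= np.exp(-isi / tau_f)                 # facilitation decays between spikes
        x = 1.0 - (1.0 - x) * np.exp(-isi / tau_d)  # vesicle pool recovers
        G *= np.exp(-isi / tau_g)                 # gliotransmitter effect decays
        U = U0 + (alpha - U0) * G                 # gliotransmitter biases U toward alpha
        u += U * (1.0 - u)                        # calcium-driven facilitation step
        rr = u * x                                # released resources at this spike
        released.append(rr)
        x -= rr                                   # deplete the pool
        G = min(1.0, G + g_release)               # astrocytic feedback (simplified)
    return np.array(released)

rr = simulate()
```

With alpha < U0 (receptors assumed depressing), the gliotransmitter feedback lowers steady-state release relative to a run with the astrocyte silenced (`g_release=0.0`).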


[T 41] Time scales and statistics of fluctuations in recurrent spiking networks Stefan Wieland1,2 , Davide Bernardi1,2 , Benjamin Lindner1,2 1. Bernstein Center for Computational Neuroscience, Berlin, Germany 2. Department of Physics, Humboldt University Berlin, Germany doi: 10.12751/nncn.bc2015.0058

Slow time scales in neural systems can arise through the interaction of fast elements. A hotly debated issue in computational neuroscience is how slow dynamics can emerge in recurrent spiking networks [1]. Due to the recurrent coupling, every output neuron of such a network is a potential input neuron. Therefore, input and output spike-train statistics must be self-consistent. We show how this self-consistency requirement can be used in iterative one-cell simulations to determine spike-train autocorrelations in large recurrent networks. For networks of perfect integrate-and-fire models, it enables us to predict analytically the existence of two possible network regimes, characterized by the presence or absence of fluctuations on long time scales, respectively. We also investigate numerically the emergence of similar fluctuations in networks of biophysically more realistic neuron models. Acknowledgements This work was funded by the BMBF (FKZ: 01GQ1001A) and by the DFG (GRK1589/2). References 1 Ostojic (2014). Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nat Neurosci. 17: 594-600

[T 42] When function mirrors structure: how slow waves are shaped by cortical layers Cristiano Capone1,2 , Beatriz Rebollo3 , Alberto Munoz Cespedes4 , Paolo Del Giudice1 , Maria V. Sanchez-Vives3,5 , Maurizio Mattia1 1. TESA, Italian Institute of Health, Rome, Italy 2. Physics, "La Sapienza" University, Rome, Italy 3. IDIBAPS, Barcelona, Spain 4. Universidad Complutense de Madrid, Madrid, Spain 5. ICREA, Barcelona, Spain doi: 10.12751/nncn.bc2015.0059

Spontaneous activity provides valuable information on neuronal network organization. Here we aimed at relating slow-wave activity in cortical slices to their laminar structure. Multi-unit activities from multisite recordings alternated between Up and Down states, and displayed a distribution of time lags between state transitions due to the propagation of an activity wave. We found different propagation modes in terms of velocity, direction and wavefront shape (Fig. 1A). Despite such variability, we consistently found that the head of the single wavefronts systematically occurred within a narrow strip almost parallel to the cortical surface (Fig. 1B), possibly corresponding to the most excitable part of the network. To investigate this hypothesis, we simulated such activity propagation in a slice model composed of oscillating cortical modules of spiking neurons arranged in a 2D lattice with nearest-neighbor connectivity. We set a non-monotonic gradient of connectivity level across the cortical depth with a strip of over-excited modules, aiming to embody the different layer excitabilities. An increase in the connectivity did not simply translate into an increase of the speed of the activity waves. Rather, this parameter had to be chosen in an optimal range in order to reproduce the experimentally observed propagating patterns. A reason for this optimal connectivity level was found within the neural field theory framework, as propagation speed is i) directly related to the Down-state stability set by the local connectivity, and ii) inversely related to the distance from the activation threshold, which depends on long-range connectivity. From the model prediction, we expected to measure maximum firing rates and the longest Up-state durations within the same excitable strip leading the activity propagation. In experiments we found this expected overlap (Fig. 1B), and relying on histological identification the excitable strips reliably superimposed on cortical layers 4 and 5 (Fig. 1C).

A. Wavefronts for two modes of propagation. B. Average strips where wavefronts propagate earlier (black), and where Up states have maximum duration (green) and magnitude (blue). C. Example match between the strip of early wave propagation and the slice’s layer. Acknowledgements Supported by EU CORTICONIC contract 600806 and MINECO BFU2011-27094.
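A toy 1D chain of excitable modules, a heavily simplified stand-in for the 2D lattice of oscillating cortical modules above: each module integrates input from its active neighbors and switches irreversibly to the Up state above a threshold. The coupling strength, leak, and threshold are illustrative assumptions; the point is only that wave propagation requires the connectivity to exceed a critical level, echoing the abstract's observation that connectivity must lie in a suitable range.

```python
import numpy as np

def wave_activation_times(n=50, w=0.3, theta=1.0, leak=0.8, t_max=500):
    """Activation time of each module for a wave seeded at one end (-1 = never)."""
    v = np.zeros(n)
    active = np.zeros(n, dtype=bool)
    t_act = np.full(n, -1)
    active[0], t_act[0] = True, 0
    for t in range(1, t_max):
        drive = np.zeros(n)
        drive[1:] += w * active[:-1]      # input from left neighbor
        drive[:-1] += w * active[1:]      # input from right neighbor
        v = leak * v + drive              # leaky integration of neighbor input
        newly = (~active) & (v > theta)
        t_act[newly] = t
        active |= newly
        if active.all():
            break
    return t_act

t_strong = wave_activation_times(w=0.3)   # coupling above the propagation threshold
t_weak = wave_activation_times(w=0.1)     # coupling below it: the wave dies out
propagated_strong = (t_strong >= 0).all()
propagated_weak = (t_weak >= 0).all()
```

In this sketch the asymptotic drive from one active neighbor is w/(1-leak), so the wave propagates only when w/(1-leak) > theta; the activation times then advance at a constant speed along the chain.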

[T 43]

Learning universal computations with spikes

Dominik Thalmeier1 , Marvin Uhlmann2,3 , Hilbert J. Kappen1 , Raoul-Martin Memmesheimer2,4 1. Department of Biophysics, Donders Institute, Radboud University, Nijmegen, Netherlands 2. Department of Neuroinformatics, Donders Institute, Radboud University, Nijmegen, Netherlands 3. Department for Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands 4. Center for Theoretical Neuroscience, Columbia University, New York, USA doi: 10.12751/nncn.bc2015.0060

Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require the previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves as substrates of powerful general-purpose computing. We then combine such networks with learning rules for outputs or recurrent connections. We show that this makes it possible to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. The figure illustrates the learning of self-sustained dynamical patterns of different complexity. (a) Sine wave generated by summed, synaptically filtered network output spike trains after learning. (b) Sample of the network’s spike trains generating the sine in (a). (c) Recalled more complicated, camel’s-hump-like pattern. (d-f) Learning of complicated chaotic dynamics (Lorenz system). (d) The spiking network approximates a trajectory of the Lorenz system during training (blue) and testing (red). (e) Detailed view of (d) highlighting how the teacher trajectory (yellow) is imitated during training and continued during testing. (f) The spiking network approximates quantitative dynamical measures that were not explicitly trained, like the tent map between subsequent maxima of the z-coordinate. The ideal tent map (yellow) is closely approximated by the tent map generated by the network output (red). For long test trials occasional errors can occur (outliers in (d), (f)).

Learning and recall of self-sustained activity with spiking neural networks.
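A rate-based stand-in for the learning scheme sketched above (the paper's contribution is the spiking implementation, which is not reproduced here): an echo-state reservoir is teacher-forced with a sine wave, a linear readout is fit by ridge regression, and the loop is then closed so the network generates the pattern autonomously. Network size, spectral scale, and the regularizer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T_train, T_test, dt = 300, 3000, 500, 0.05
W = 0.9 * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # recurrent weights (echo-state scale)
w_fb = rng.uniform(-1.0, 1.0, N)                       # feedback from the output unit
t_all = np.arange(T_train + T_test + 1)
target = np.sin(2 * np.pi * 0.3 * t_all * dt)          # pattern to be learned

# Teacher forcing: drive the reservoir with the target via the feedback weights.
x = np.zeros(N)
states = np.zeros((T_train, N))
for t in range(T_train):
    x = np.tanh(W @ x + w_fb * target[t])
    states[t] = x

# Ridge-regression readout predicting the next target value from the state.
lam = 1e-4
w_out = np.linalg.solve(states.T @ states + lam * np.eye(N),
                        states.T @ target[1:T_train + 1])
train_err = np.sqrt(np.mean((states @ w_out - target[1:T_train + 1]) ** 2))

# Closed loop: the readout replaces the teacher, so the pattern is self-sustained.
z = states[-1] @ w_out
outputs = np.empty(T_test)
for t in range(T_test):
    x = np.tanh(W @ x + w_fb * z)
    z = x @ w_out
    outputs[t] = z
```

How faithfully the closed loop sustains the pattern depends on the reservoir parameters; the sketch only illustrates the train-then-close-the-loop structure of this class of algorithms.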

[T 44]

Variability Dynamics in Computing Reservoirs

Thomas Rost1,2 , Martin Paul Nawrot1,2 1. Computational Systems Neuroscience, Institute for Zoology, Department of Biology, University of Cologne, Germany 2. BCCN Berlin, Germany doi: 10.12751/nncn.bc2015.0061

Cortical spike trains are highly variable on various time scales. [1] showed that Poisson-like firing statistics can be obtained in large networks of randomly connected deterministic units if the inhibitory and excitatory inputs are balanced and spiking is fluctuation-driven. In vivo, however, Fano factors above unity are found, which are reduced by stimulus presentation or movement execution [2,3,4]. Recently it has been shown that clusters of higher connection probability in balanced networks can lead to state switching in the cluster firing rates, thereby introducing a slower component of rate variance to the variability [5,6]. The resulting dynamics are reminiscent of the modulated variability in vivo and can even be learned in random networks through STDP [7]. [8] used self-organizing recurrent neural networks (SORNs) of binary units as computing reservoirs and showed that a combination of learning mechanisms leads to improved performance in sequence learning tasks. In [9] the same authors showed that the resulting network dynamics can be interpreted as Bayesian inference and that a stimulus-induced modulation of the trial-to-trial variability emerges from learning. In this work we explore the suitability of SORN-inspired networks for more general reservoir computing paradigms. We introduce some changes motivated by the balanced-network literature and study the effects of the various parameters on the network dynamics. We show that the variability modulation described by [6] can be reproduced in our networks using predefined excitatory clusters. We then introduce spatio-temporal stimuli in a reservoir computing setup and examine the effect of learning on computational performance and variability dynamics. In particular, we ask whether the state switching between attractors in the spontaneous activity, which leads to an increased FF, and its suppression through stimulation, which reduces the FF, can also be achieved if input patterns are not spatially segregated. References 1 van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10(6), 1321-71. 2 Churchland, M. M., Yu, B. M., Cunningham, J. P., Sugrue, L. P., Cohen, M. R., Corrado, G. S., Newsome, W. T., et al. (2010). Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience, 13(3), 369-78. 3 Nawrot, M. P., Boucsein, C., Rodriguez Molina, V., Riehle, A., Aertsen, A., & Rotter, S. (2008). Measurement of variability dynamics in cortical spike trains. Journal of Neuroscience Methods, 169(2), 374-90. doi:10.1016/j.jneumeth.2007.10.013 4 Nawrot, M. P. (2010). Analysis of Parallel Spike Trains. In S. Grün & S. Rotter (Eds.), (pp. 1-22). Boston, MA: Springer US. 5 Deco, G., & Hugues, E. (2012). Neural network mechanisms underlying stimulus driven variability reduction. PLoS Computational Biology, 8(3), e1002395. doi:10.1371/journal.pcbi.1002395 6 Litwin-Kumar, A., & Doiron, B. (2012). Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience, 15(11), 1498-1505. 7 Litwin-Kumar, A., & Doiron, B. (2014). Formation and maintenance of neuronal assemblies through synaptic plasticity. Nature Communications, 5, 5319. 8 Lazar, A., Pipa, G., & Triesch, J. (2009). SORN: a self-organizing recurrent neural network. Frontiers in Computational Neuroscience, 3, 23. 9 Hartmann, C., Lazar, A., & Triesch, J. (2014). Where’s the noise? Key features of neuronal variability and inference emerge from self-organized learning. bioRxiv, 1-29.
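The link between attractor switching and Fano factors above unity can be illustrated without any network, as a doubly stochastic Poisson toy (not the authors' model): spike counts whose underlying rate switches between two states have FF well above 1, while a matched constant-rate process stays near 1. Rates, bin size, and switching probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_counts(n_trials=1000, n_bins=100, rates=(2.0, 10.0),
                 p_switch=0.005, bin_dt=0.05):
    """Per-trial spike counts from a two-state (telegraph) rate process."""
    counts = np.empty(n_trials)
    for i in range(n_trials):
        state = rng.integers(2)           # random initial attractor state
        c = 0
        for _ in range(n_bins):
            if rng.random() < p_switch:   # rare switch between rate states
                state = 1 - state
            c += rng.poisson(rates[state] * bin_dt)
        counts[i] = c
    return counts

switching = spike_counts()                        # slow rate switching
constant = spike_counts(rates=(6.0, 6.0))         # same mean rate, no modulation
ff_switching = switching.var() / switching.mean()
ff_constant = constant.var() / constant.mean()
```

Suppressing the switching (e.g. by strong stimulation pinning one state) removes the slow rate-variance component and drives the Fano factor back toward 1, the quenching effect discussed in the abstract.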

[T 45] Autowaves of spiking activity synchronization in a model neuronal network with relaxational synaptic plasticity Dmitry Zendrikov1,2 , Alexander Paraskevov2 1. Moscow Institute of Physics and Technology (State University), 9 Institutskiy per., Dolgoprudny, 141700, Moscow Region, Russian Federation 2. National Research Centre "Kurchatov Institute", 1 Kurchatov sq., 123182, Moscow, Russian Federation doi: 10.12751/nncn.bc2015.0062

There exists a short-term (∼100 ms), repetitive, spontaneous synchronization of network spiking activity in planar neuronal networks grown in vitro from initially dissociated cortical or hippocampal neurons [1]. Such a phenomenon is called a population burst (PB) or network spike. It was shown experimentally [2] that a PB may propagate through the neuronal network as a traveling wave, diverging from some occasional center. We have generalized the results of Ref. [3], where PBs occurred in a model neuronal network of leaky integrate-and-fire (LIF) neurons with a short-term relaxational synaptic plasticity, to the case of a spatially-dependent network topology where the probability of a connection between neurons depends on their mutual arrangement [4]. In particular, we show that a typical PB has complex spatial dynamics with a few occasional local sources of spiking synchronization, from which it propagates through the planar network as traveling waves (Fig. 1), analogous to the divergent circular waves on the surface of water resulting from a local perturbation. This is in qualitative agreement with the results of [2]. Fig. 1 caption: The emergence and spread of a population burst in a planar neuronal network consisting of 40 thousand excitatory LIF neurons randomly distributed over the square unit area. Each neuron has on average 60 outgoing connections. Most of the simulation parameters correspond to the original model [3]. Left: (top) the average spiking activity of the network, (bottom) raster, i.e., the instantaneous activity of individual neurons in the network. Right: Sequential spatial snapshots of the network spiking activity. Blue dots depict neurons and red dots highlight spiking neurons. References 1 D. Eytan and S. Marom. Dynamics and effective topology underlying synchronization in networks of cortical neurons. J. Neurosci. 26, 8465-8476 (2006) 2 E. Maeda, H.P. Robinson, A. Kawana. The mechanisms of generation and propagation of synchronized bursting in developing networks of cortical neurons. J. Neurosci. 15, 6834-6845 (1995) 3 M. Tsodyks, A. Uziel and H. Markram. Synchrony generation in recurrent networks with frequency-dependent synapses. J. Neurosci. 20, RC50 (2000) 4 R.D. Traub and R. Miles. Neuronal Networks of the Hippocampus. Cambridge University Press, 1991

[T 46] Multiscale dynamics explains complex activity patterns observed in dissociated neural cultures Pavel Esir1,2 , Alexander Simonov1,2 1. Dept. of Theory of Oscillations and Automatic Control, N.I.Lobachevsky State University of Nizhny Novgorod, 23 Gagarin ave., Nizhny Novgorod, Russia 2. Neuroscience Center of Institute of Biology and Biomedicine, N.I.Lobachevsky State University of Nizhny Novgorod, 23 Gagarin ave., Nizhny Novgorod, Russia doi: 10.12751/nncn.bc2015.0063

Population bursting is a ubiquitous pattern of neural network activity observed under various experimental conditions. In particular, its network nature has been explored and is well studied in in vitro dissociated cultures [1] and in mathematical models of recurrent spiking networks (e.g. [2]). However, a plethora of much more complex activity patterns, also called superbursts, remains unexplained, and simple mathematical models fail to reproduce them. Some types of superbursts are considered seizure-like activity and are associated with an impairment of the balance between excitation and inhibition. In this work we use spiking neural network and rate-based modelling approaches to reveal the dynamical mechanisms of these complex patterns. Using our models we investigate how the interplay between biophysical processes with different time scales shapes the network dynamics underlying the generation of complex activity patterns. We analyse the key factors of multiscale dynamics and investigate the roles of NMDA currents, slow spike-frequency adaptation, bicarbonate-dependent depolarizing potentials and short-term depression. As a result, the developed models are fine-tuned to reproduce experimentally observed activity patterns. An example of a superburst is shown in Figure 1. Firing rates of the excitatory and inhibitory populations are represented by the variables ve and vi. Slowly decaying NMDA currents (variable uee), entrained by highly synchronized bursting, subsequently reactivate whole-network discharges with increasing frequency. After each of these discharges the activity quickly drops due to short-term depression of excitatory synapses (variable xee). Long persistent spiking triggers increasing slow spike-frequency adaptation (variables Ge and Gi), which eventually leads to the end of the superburst. The detailed figure caption is placed at the end of the abstract.

Superburst activity in the rate-based model with multiple time scales. Acknowledgements The work was supported by the Russian Science Foundation (proj. No. 14-11-00693) and by the Ministry of Education and Science of Russia (proj. Nos. 14.581.21.0011, 14.578.21.0074 and 14.578.21.0094) References 1 Pimashkin, A., Kastalskiy, I., Simonov, A., Koryagina, E., Mukhina, I., & Kazantsev, V. (2011). Spiking signatures of spontaneous activity bursts in hippocampal cultures. Frontiers in Computational Neuroscience, 5, 46. http://dx.doi.org/10.3389/fncom.2011.00046 2 Simonov, A. Y., & Kazantsev, V. B. (2011). Model of the appearance of avalanche bioelectric discharges in neural networks of the brain. JETP Letters, 93(8), 470–475. http://doi.org/10.1134/S0021364011080133

[T 47] High-resolution 3D tracking for the quantitative analysis of exploratory behavior Justin Joseph Graboski1 , Eduardo Blanco-Hernández1 , Anton Sirota1 1. Department Biology II, Ludwig-Maximilians Universität, Großhaderner Straße 2 82152 Planegg-Martinsried, München, Germany doi: 10.12751/nncn.bc2015.0064

During free exploration, organisms display complex sequences of behavioral patterns that evolved to gather information from the surroundings, which can be critical for survival. The coordination of such complex patterns is performed by the nervous system, and in particular the brain. Although accurate descriptions of behavior have been made by ethologists, the time-consuming nature of categorization and its lack of moment-by-moment resolution have discouraged the use of these descriptions during experiments. Advances in technology now have the potential to translate complete sets of ethological descriptions into a quantitative framework within laboratory settings. This approach rests on the premise that an objective and quantitative description of behavior will lead us to a better understanding of brain activity and function. Here we present a methodology that allows us to quantify and characterize the behavior of rats during free exploration. We use a marker-based motion capture system to track reflective markers attached to the head and body of the rats, and reconstruct the 3D time series of each marker with high spatio-temporal resolution. This offers a low-dimensional representation of animal posture and kinematics, allowing the quantitative decomposition of behavior. We show that the motions of the head and body are largely independent. Fine-timescale analysis shows that motion is initiated by the head, followed by the body. We define sets of empirical features to decompose the behavior into basic “behavioral states” such as rearing, walking with head high, walking with head low, grooming, and stay-in-place, and use them to train unsupervised classifiers of animal behavior. We identify distinct modes of exploration associated with low walking and rhythmic head motion; moreover, this exploratory state is highly coherent with sniffing. Finally, we show a preliminary analysis of the decomposition of behavior during a standard exploratory task.


References 1 Benjamini, Y. et al. Ten ways to improve the quality of descriptions of whole-animal movement. Neurosci. Biobehav. Rev. 34, 1351–65 (2010). 2 Casarrubea, M. et al. Temporal structure of the rat’s behavior in elevated plus maze test. Behav. Brain Res. 237, 290–299 (2013). 3 Gibson, E. J. Exploratory behavior in the development of perceiving, acting, and the acquiring of knowledge. Annu. Rev. Psychol. 39, 1–41 (1988). 4 Gomez-Marin, A., Paton, J. J., Kampff, A. R., Costa, R. M. & Mainen, Z. F. Big behavioral data: psychology, ethology and the foundations of neuroscience. Nat. Neurosci. 17, 1455–1462 (2014).
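A toy version of the unsupervised decomposition step described above: synthetic per-frame features (head speed, body speed, head height) drawn from three assumed "behavioral states" are clustered with plain k-means. The feature choices, state parameters, and clustering algorithm are illustrative assumptions, not the authors' empirical pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three synthetic states in (head speed, body speed, head height) feature space.
means = np.array([[0.5, 0.5, 2.0],    # stay-in-place: low speeds, head low
                  [8.0, 7.0, 2.5],    # walking head-low: high speeds
                  [1.0, 0.5, 9.0]])   # rearing: low speeds, head high
labels_true = rng.integers(3, size=600)
X = means[labels_true] + rng.normal(0.0, 0.5, (600, 3))

def kmeans(X, init, n_iter=50):
    """Lloyd's algorithm: alternate nearest-center assignment and center update."""
    centers = init.copy()
    assign = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(len(centers)):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(0)
    return assign, centers

init = np.array([X[labels_true == j][0] for j in range(3)])  # one seed per state
assign, centers = kmeans(X, init)
purity = np.mean([np.bincount(assign[labels_true == j]).max() / np.sum(labels_true == j)
                  for j in range(3)])
```

With well-separated states the recovered clusters match the generating states almost perfectly; real behavioral data would of course be far less cleanly separated.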

[T 48] Frequency Filtering of Deep and Superficial Input to the Martinotti Loop Richard Naud1 , Henning Sprekeler1 1. Bernstein Center for Computational Neuroscience Berlin, Technische Universität Berlin, MarchStr. 23, Germany doi: 10.12751/nncn.bc2015.0065

Pyramidal neurons often connect to inhibitory cells that project back to the dendrites of the pyramidal neurons. This recurring microcircuit, the Martinotti loop, is anatomically poised to combine the input of the deeper cortical layers with what impinges on the superficial layers. We have constructed a computational model that captures salient electrophysiological features of this system. To capture nonlinear dendritic integration in the dendritic tuft of pyramidal cells [1], we simplified a two-compartment model previously used [2] to predict spike-timing responses to complex current injections. We then incorporated the pyramidal neuron model in a network where the excitatory synapses show short-term facilitation and the inhibitory synapses target dendritic compartments. The model reproduces qualitatively the saturating frequency-dependent disynaptic inhibition observed in slices [3]. We then analyzed the input-output properties of the Martinotti loop, embedded in a balanced-state network. Our simulations show a depth-dependent frequency filtering of information, which requires dendritic activity and short-term plasticity. We contrast our results with signatures of feedback and feedforward cortical processing in vivo [4]. Acknowledgements This work was supported by the German ministry for Science and Education through a Bernstein Award to H.S. (grant no. 01GQ1201). References 1 Larkum M, Zhu J, Sakmann B. A new cellular mechanism for coupling inputs arriving at different cortical layers. Nature. 1999;398:338–341. 2 Naud R, Bathellier B, Gerstner W. Spike-timing prediction in cortical neurons with active dendrites. Frontiers in Computational Neuroscience. 2014;8. 3 Berger TK, Silberberg G, Perin R, Markram H. Brief bursts self-inhibit and correlate the pyramidal network. PLoS Biology. 2010;8(9):e1000473. 4 van Kerkoerle T, Self MW, Dagnino B, Gariel-Mathis MA, Poort J, van der Togt C, et al. Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex. Proceedings of the National Academy of Sciences. 2014;111(40).
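The saturating frequency dependence of disynaptic inhibition can be sketched analytically with the steady state of a facilitating Tsodyks-Markram synapse (a schematic stand-in for the pyramidal-to-Martinotti connection, not the authors' network model). Time constants and the baseline release probability are illustrative assumptions.

```python
import numpy as np

def tm_steady_state(rate, U=0.1, tau_f=0.5, tau_d=0.1):
    """Per-spike release u*x at steady state for regular spiking at `rate` (Hz)."""
    isi = 1.0 / rate
    ef = np.exp(-isi / tau_f)
    ed = np.exp(-isi / tau_d)
    u = U / (1.0 - (1.0 - U) * ef)             # facilitation variable just after a spike
    x = (1.0 - ed) / (1.0 - (1.0 - u) * ed)    # resources available just before a spike
    return u * x

rates = np.array([1.0, 5.0, 20.0, 50.0, 100.0])
# Transmitted drive (rate * per-spike release): supralinear at low rates due to
# facilitation, saturating toward 1/tau_d at high rates due to depression.
drive = rates * np.array([tm_steady_state(r) for r in rates])
```

The drive more than doubles between 1 and 5 Hz but barely grows between 50 and 100 Hz, which is the qualitative shape of the saturating disynaptic inhibition cited from Berger et al. (2010).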



[T 49] A comparison of deterministic and stochastic ion channel representations in a model of a cerebellar nucleus neuron Maria Psarrou1,2 , Maria Schilstra1,2 , Neil Davey1,2 , Volker Steuber1,2 1. School of Computer Science, University of Hertfordshire, College Lane, Hatfield, Herts AL10 9AB, United Kingdom 2. Science and Technology Research Institute, College Lane, Hatfield, Herts AL10 9AB, United Kingdom doi: 10.12751/nncn.bc2015.0066

Ion channels can either be modelled at a macroscopic level, using a deterministic representation such as the Hodgkin-Huxley formalism, or at a more detailed single-channel level, where their stochastic nature is taken into account by using a Markov formalism. The Hodgkin-Huxley model describes the combined collective effect of the channel population on the membrane potential, but it does not provide a comprehensive kinetic diagram. As a result, various aspects of the behaviour, and consequently the functional role, of individual channels can be overlooked. A more accurate alternative channel formalism is the Markov model. In Markov models, a single channel is represented by a kinetic scheme comprising a finite set of discrete intermediate states with probabilistic transitions from one state to another. Channel noise, introduced by the stochastic gating of the ion channels, can affect the generation and timing of action potentials and therefore potentially also single-neuron computations. In the present study, the voltage-gated channels of a morphologically realistic conductance-based cerebellar nucleus (CN) neuron model were expressed as Markov formalisms and their behaviour was compared with that of their deterministic Hodgkin-Huxley-type counterparts. Our results show that the majority of the deterministic CN channel models could easily be replaced by stochastic versions without affecting neuronal behaviour. However, this was not the case for the fast sodium channel, where the parameter changes that had to be introduced in order to match the activity of the stochastic and deterministic models depended on the level of activation of the neuron, even for very small single-channel conductances in the stochastic model. We are currently studying the implications of stochastic ion channel gating on computations performed by CN neurons.
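The deterministic-versus-stochastic comparison can be illustrated with the simplest possible case: a two-state (open/closed) channel with fixed rates, simulated as a deterministic gating ODE and as a binomial Markov update over a channel population. A voltage-independent two-state scheme is a deliberate simplification of the abstract's voltage-gated channels; rates and channel count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 40.0, 10.0          # opening / closing rates (1/s), assumed constant
dt, T, N = 1e-4, 1.0, 10000       # time step (s), duration (s), number of channels

p_det = 0.0                        # deterministic open probability (HH-style)
n_open = 0                         # stochastic open-channel count (Markov)
trace_det, trace_sto = [], []
for _ in range(int(T / dt)):
    # Deterministic gating variable: dp/dt = alpha*(1-p) - beta*p
    p_det += dt * (alpha * (1.0 - p_det) - beta * p_det)
    # Markov population: each closed channel opens with prob alpha*dt per step,
    # each open channel closes with prob beta*dt per step.
    opened = rng.binomial(N - n_open, alpha * dt)
    closed = rng.binomial(n_open, beta * dt)
    n_open += opened - closed
    trace_det.append(p_det)
    trace_sto.append(n_open / N)

p_inf = alpha / (alpha + beta)                           # analytic steady state (0.8)
mean_sto = np.mean(trace_sto[len(trace_sto) // 2:])      # stochastic steady-state mean
```

The population mean converges to the deterministic steady state, while the stochastic trace fluctuates around it with a variance that shrinks as N grows; it is exactly this residual channel noise that can matter for spike timing.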

[T 50]

Signal propagation in small reproducible neuronal networks

László Demkó1 , Doris Ling1 , Serge Weydert1 , Csaba Forró1 , János Vörös1 1. Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, ETH Zurich, CH-8092, Switzerland doi: 10.12751/nncn.bc2015.0067

Understanding how the human brain stores and processes information is undoubtedly one of the grand challenges of this century. Despite of the vast amount of technical possibilities we still have very little understanding (and especially consensus) about e.g. learning, which might be partially due to the lack of well-defined, small “study networks” of real neurons that can be reproducibly and quantitatively analyzed at a few or single neuron level over extended periods of time. Apart from the experimental approach, modeling has always played a key role in understanding neural coding and signal transmission, however, model based predictions for large networks are still far from reality. In order to gain insight into the information processing and neurocomputation 78

Acknowledgements This research has been supported by the 3DNeuroN project in the European Union’s Seventh Framework Programme, Future and Emerging Technologies, grant agreement n°296590.

[T 51] Input spike trains reduce dynamical entropy production in balanced networks Rainer Engelken1,2 , Fred Wolf1,2 1. Theoretical Neurophysics, MPI for Dynamics and Self-Organization, Goettingen, Germany 2. BCCN, Goettingen doi: 10.12751/nncn.bc2015.0068

A longstanding hypothesis claims that structured input in neural circuits enhances reliability of spiking responses. While studies in single neurons well support this hypothesis [Mainen 1995], the impact of input structure on the dynamics of recurrent networks is not well understood. The high dimensional and often chaotic dynamics of large recurrent networks requires a type of analysis that can systematically asses the role of recurrent interactions and characterize the networks collective dynamics. Studies in rate chaotic networks suggest a suppression of chaos by structured input [Molgedey 1992], but in spiking input, this has not yet been thoroughly analyzed. To address this challenge, we here describe how the analysis of dynamical stability and entropy production can be generalized for examining balanced networks driven by streams of input spike trains. Previous studies of the dynamic stability of the balanced state used a constant external input [v.Vreeswijk 1996; Monteforte 2010] or white noise [Lajoie 2013, 2014]. An analytical expression for the Jacobian enables us to calculate the full Lyapunov spectrum. We solved the dynamics in numerically exact event-based simulations and calculated Lyapunov spectra, entropy production rate and attractor dimension. We examined the transition from constant to structured input in various scenarios, varying the input spike rate and/or coupling strength, while keeping the firing rate of the target population fixed. In general, we find a suppression of chaos by input spike trains. This finding holds both for variations of input rate and coupling strength and is robust to deviations from Poisson statistics. We also find that both independent bursty input spike trains and correlated input more strongly reduce chaos in spiking networks. Our work extends studies of chaotic rate models to spiking neuron models and 79

.

of small networks, we build small neuronal systems in a reproducible way to study and model how the spontaneous and stimulated activity arises and evolves over time, and how it changes for different stimulation patterns. By the characterization of signal propagation in different cultures we look for repeating spatiotemporal patterns and directional connections, trying to map the possible correlations between the two. As for the theoretical aspect, we attempt to make one step above the classical models that operate with different synaptic strengths and transfer probabilities, and construct a simplified model based on our observations of rather deterministic firing patterns in the above-mentioned small networks. The model assumes that the flow of information (activity) between groups of neurons can be described by delay times representing the time needed for the information traveling from one group of neurons to the other. This delay time is a property easy to measure by common techniques such as patch clamping or multi-electrode arrays as part of the characterization step, providing a genuine way to bridge the gap between experiments and theory.

Posters Tuesday opens a novel avenue to investigate the role of sensory streams in shaping the dynamics of large network.

Poisson input spike trains suppress chaos in balanced circuits. Each theta neuron in a random balanced network receives an independent spike train from a Poisson process with weak synaptic coupling. Acknowledgements FW was supported by the DFG through CRC 889 and by the Volkswagen Foundation. RE was supported by the Evangelisches Studienwerk e.V. References 1 Mainen, Z. F., and Sejnowski, T. J. Science 268(5216) (1995). 2 Molgedey, L., Schuchhardt, J., Schuster, H. G. Phys. Rev. Lett. 69 (1992). 3 van Vreeswijk, C. & Sompolinsky, H. Science 274 (1996). 4 Monteforte, M. & Wolf, F. Phys. Rev. Lett. 105 (2010). 5 Monteforte, M. & Wolf, F. Phys. Rev. X (2014). 6 Lajoie, G., Lin, K. K. & Shea-Brown, E. Physical Review E 87(5) (2013). 7 Wolf, F., Engelken, R., Puelma-Touzel, M., Weidinger, J. D. F., & Neef, A. Current Opinion in Neurobiology 25, 228-236 (2014). 10.1016/j.conb.2014.01.017 8 Lajoie, G., Thivierge, J.-P. & Shea-Brown, E. Frontiers in Computational Neuroscience (2014).
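The suppression of chaos by structured drive that the abstract investigates for spiking networks can be illustrated in the simpler rate-network setting it cites [Molgedey 1992]. The following is an illustrative sketch, not the authors' event-based method: it estimates the largest Lyapunov exponent of a random tanh rate network with Benettin's two-trajectory method, with and without a sinusoidal drive; all parameter values are placeholders.

```python
import numpy as np

def largest_lyapunov(J, drive, steps=4000, dt=0.05, d0=1e-8, seed=0):
    """Benettin's method: evolve a reference and a perturbed trajectory of
    the rate network x' = -x + J tanh(x) + I(t), renormalising the
    separation to d0 after every step."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = rng.standard_normal(n)
    f = lambda x, t: x + dt * (-x + J @ np.tanh(x) + drive(t))
    for t in range(1000):                    # discard transient
        x = f(x, t * dt)
    u = rng.standard_normal(n)
    y = x + d0 * u / np.linalg.norm(u)       # perturbed copy at distance d0
    lam = 0.0
    for t in range(steps):
        x, y = f(x, t * dt), f(y, t * dt)
        d = np.linalg.norm(y - x)
        lam += np.log(d / d0)
        y = x + d0 * (y - x) / d             # renormalise the separation
    return lam / (steps * dt)

rng = np.random.default_rng(1)
n, g = 200, 3.0
J = g * rng.standard_normal((n, n)) / np.sqrt(n)   # strong coupling -> chaos
lam_free = largest_lyapunov(J, drive=lambda t: 0.0)
lam_driven = largest_lyapunov(J, drive=lambda t: 4.0 * np.sin(0.5 * t))
```

In this toy setting the undriven network is chaotic (positive exponent) and the strong periodic drive reduces the exponent, mirroring the chaos suppression reported above.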

[T 52] Impact of delayed interactions on the dynamical properties of spiking networks Agostina Palmigiano1 , Michael Monteforte, Fred Wolf1 1. Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Am Fassberg 17, Germany doi: 10.12751/nncn.bc2015.0069

Inter-neuronal transmission delays are a fundamental feature of neuronal networks and have a direct impact on the characteristics of their collective dynamics. Delayed inhibition is responsible for the emergence of the neural rhythms [1] associated with diverse cognitive functions [2], and plays a main role in defining the frequency of these oscillations [3], frequencies thought to act as separate channels of neural information transmission [4]. Spiking neural networks with synaptic delays can be modeled as coupled differential equations in an infinite-dimensional space, for which an initial history function needs to be specified. The problem can, however, be reduced to one of finite, although varying, dimension in the case of delta coupling between units [5]. The evident complexity of such systems challenges their analytical tractability and therefore the study of their dynamics. In this work we make progress in the treatment of such networks and in analyzing their fundamental properties for finite and fixed

References 1 Buzsaki, G., Wang, X.-J. Mechanisms of gamma oscillations, Annual Review of Neuroscience 35, 203-225, 2012 2 Buzsaki, G. Rhythms of the Brain, Oxford University Press, 2006 3 Brunel, N., Wang, X.-J. What determines the frequency of fast network oscillations with irregular neural discharges? Journal of Neurophysiology, 90(1), 415-430, 2003 4 Colgin, L. L., Denninger, T., Fyhn, M., Hafting, T., Bonnevie, T., Jensen, O., ..., Moser, E. I. Frequency of gamma oscillations routes flow of information in the hippocampus, Nature, 462(7271), 353-357, 2009 5 Ashwin, P., Timme, M. Unstable attractors: existence and robustness in networks of oscillators with delayed pulse coupling, Nonlinearity, 18(5), 2035-2060, 2005

[T 53] Concurrent information processing in the neural ensembles: temporal and topological characteristics Margarita Zaleshina1 , Alexander Zaleshin2 1. UNPK, Moscow Institute of Physics and Technology, 1A Kerchenskaya St., Moscow, Russia 2. Conditioned Reflexes and Physiology of Emotion Lab, Institute of Higher Nervous Activity and Neurophysiology, 5A Butlerova St., Moscow, Russia doi: 10.12751/nncn.bc2015.0070

Introduction: Information received in separate areas of the brain from a variety of sources initially undergoes independent processing. This is accompanied by the concurrent activation of particular neuronal populations. Maps of functional connectivity allow analyzing the temporal and topological characteristics of separate neural ensembles. The analysis of neural activity should take into account the following: the features of the coding of information from different sensory organs; the presence or absence of direct anatomical connections between neural ensembles; the spatial arrangement of neuronal assemblies relative to each other. Methods: EEG and fMRI data, recorded in response to simultaneously incoming visual and auditory stimuli, are used as the experimental basis. Visualization of the experimental data allows constructing neuroimages with reference to the topology of the investigated brain. During the rendering process, discrete observations are expanded into analytic continuations using neuroreconstruction methods. Independent component analysis allows separating the layers of observed activity. Geoinformation system applications are used for spatial representations of the brain areas, and a model of the dynamics of signal propagation is calculated. Results: Spatial analysis of the layers reveals features of the functional interaction of neural ensembles during the concurrent processing of signals from different sources. We note that functional interaction and temporal synchronization are typical for anatomically connected as well as for independent neural


degrees of freedom. We derive an expression for the single-spike Jacobian of the delayed neural network, circumventing the dependence on its history by introducing, for every neuron, a postsynaptic single-compartment axon (SCA) modeled as a sub-threshold unit. Synaptic interaction therefore remains instantaneous, but effective transmission delays are introduced by the additional steps of input integration; the delay is thus the finite time required by the delaying SCA to "spike". This novel procedure allows us to numerically obtain the full Lyapunov spectrum and its derived quantities, such as the attractor dimension and the entropy production rate. We provide a framework to study the impact of heterogeneous delayed interactions on the relation between two crucial aspects: the emergence of network oscillations and their properties on the one hand, and the stability and the possible limitations for information storage in such networks on the other.
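The bookkeeping problem that transmission delays create — a pulse emitted now must act only later — can be illustrated with a clock-driven toy network, quite unlike the event-based SCA construction above: a circular buffer holds pulses scheduled for future time steps. All parameters below are illustrative placeholders.

```python
import numpy as np

def simulate_delayed_lif(n=50, steps=2000, dt=0.1, delay=2.0, seed=0):
    """All-to-all inhibitory LIF network with a fixed transmission delay:
    every emitted pulse is booked into a circular buffer and delivered
    `delay` milliseconds later."""
    rng = np.random.default_rng(seed)
    tau, v_th, v_reset = 10.0, 1.0, 0.0
    w = -0.3 / n                       # weight of the delayed inhibition
    i_ext = 1.5                        # constant suprathreshold drive
    d = int(round(delay / dt))         # delay in time steps
    buf = np.zeros(d + 1)              # pulses scheduled for future steps
    v = rng.uniform(0.0, 1.0, n)
    spike_counts = []
    for t in range(steps):
        slot = t % (d + 1)
        i_syn = buf[slot]              # input that was scheduled d steps ago
        buf[slot] = 0.0
        v += dt / tau * (-v + i_ext) + i_syn
        fired = v >= v_th
        if fired.any():
            spike_counts.append(int(fired.sum()))
            v[fired] = v_reset
            buf[(t + d) % (d + 1)] += w * fired.sum()  # deliver after the delay
    return spike_counts

counts = simulate_delayed_lif()
```

Delayed all-to-all inhibition of this kind is the classic substrate for the network oscillations the abstract discusses.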

ensembles. Conclusion: Concurrent processing in the brain is observed as complex dynamic activity of neuronal ensembles. This research allows us to describe the topological and temporal characteristics of the individual ensembles and to build a map of the dynamic activity of the complex neural network.

Neural coding [T 54] Inter-spike intervals reveal functionally distinct cell populations in the medial entorhinal cortex Patrick Latuske1 , Oana Toader1 , Kevin Allen1 1. Department of Clinical Neurobiology, Medical Faculty of Heidelberg University and German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany doi: 10.12751/nncn.bc2015.0071

The superficial layers of the medial entorhinal cortex (MEC) contain spatially selective neurons that are crucial for spatial navigation and memory (Hafting et al., 2005). These highly specialized neurons include grid cells, border cells, head direction cells and irregular spatially selective cells (Hafting et al., 2005; Sargolini et al., 2006). In addition, MEC neurons display a large variability in their spike patterns at a millisecond time scale. In this study, we analyzed spike trains of neurons in the MEC superficial layers and found that these cells can be classified into two groups based on their propensity to fire spike doublets at 125-250 Hz. The two groups, labeled "bursty" and "non-bursty" neurons, differed in their spike waveforms and inter-spike interval adaptation but displayed a similar mean firing rate. Grid cell spatial periodicity was more commonly observed in bursty than in non-bursty neurons. In contrast, most neurons with head direction selectivity or that fired at the border of the environment were non-bursty neurons. During theta oscillations, both bursty and non-bursty neurons fired preferentially near the end of the descending phase of the cycle, but the spikes of bursty neurons occurred at an earlier phase than those of non-bursty neurons. Finally, analysis of spike-time cross-correlations between simultaneously recorded neurons suggested that the two cell classes are differentially coupled to fast-spiking interneurons: bursty neurons were twice as likely as non-bursty neurons to have excitatory interactions with putative interneurons. These results demonstrate that bursty and non-bursty neurons are differentially integrated in the MEC network and preferentially encode distinct spatial signals. Acknowledgements This work was supported by an Emmy Noether Program grant (AL 1730/1-1) to KA and a Collaborative Research Centre grant (SFB-1134) from the DFG.
References 1 Hafting et al., 2005 10.1038/nature03721 2 Sargolini et al., 2006 10.1126/science.1125572
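A propensity to fire doublets at 125-250 Hz corresponds to inter-spike intervals of 4-8 ms. The following minimal sketch splits synthetic spike trains by that doublet fraction; the 10% cut-off is an arbitrary illustration, not the classification procedure of the study.

```python
import numpy as np

def doublet_fraction(spike_times, lo=0.004, hi=0.008):
    """Fraction of inter-spike intervals between 4 and 8 ms,
    i.e. spike doublets at 125-250 Hz."""
    isi = np.diff(np.sort(np.asarray(spike_times, float)))
    return float(np.mean((isi >= lo) & (isi <= hi))) if isi.size else 0.0

def classify(spike_times, threshold=0.1):
    """Crude 'bursty' vs 'non-bursty' split by doublet fraction
    (illustrative threshold, not the paper's criterion)."""
    return "bursty" if doublet_fraction(spike_times) > threshold else "non-bursty"

# synthetic check: a ~10 Hz Poisson-like train vs one with 5 ms doublets added
rng = np.random.default_rng(0)
poisson_train = np.cumsum(rng.exponential(0.1, 500))
doublet_train = np.concatenate([poisson_train, poisson_train + 0.005])
```

For the Poisson-like train only a few percent of intervals fall in the doublet range, while the train with injected 5 ms doublets lands well above the threshold.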


[T 55] Characterization of the activity of head-direction cells, place cells and grid cells during navigation in darkness José Antonio Pérez Escobar1,2 , Laura Kohler1,2 , Kevin Allen1,2

Path integration refers to the ability to navigate in an environment using self-motion cues derived from the vestibular and proprioceptive systems (Mittelstaedt and Mittelstaedt, 1980). This form of navigation depends on a nearly continuous estimation of the distance and direction of movement, and is especially important when reliable external landmarks are absent. The directional component of path integration likely depends on the activity of head-direction cells (Taube, 2007), whereas the estimation of position could be obtained from the firing of grid cells or place cells (Moser et al., 2008). Here we developed a protocol to investigate the spatial representations generated by these three cell types during path integration. We performed in vivo multiple single-unit recordings of head-direction cells from the anterior dorsal thalamic nucleus, place cells from the CA1 region, and grid cells from the medial entorhinal cortex. Mice were trained to run on an elevated circular platform for 30 2-min trials. In half of the trials, one light source located at the periphery of the platform was turned on and acted as a polarizing cue. The remaining trials were performed in darkness. Our preliminary results showed that the spatial selectivity of the three cell types decreased in darkness. We found that the activity of small groups of head-direction cells could be used to monitor error in the head-direction system during path integration. Consistent with the hypothesis that error accumulates during navigation in darkness, the orientation of the head-direction network progressively drifted away from its original anchoring during dark trials. In contrast to head direction cells, which conserve their firing rates in all conditions, place cells showed pronounced changes in firing rates between light and dark trials. Further analysis on the firing patterns of place cells and grid cells will be presented. 
Acknowledgements This work was supported by an Emmy Noether Program grant (AL 1730/1-1) to KA and a Collaborative Research Centre (SFB-1134) from the DFG. The authors declare no competing financial interests. References 1 Mittelstaedt, M. L., & Mittelstaedt, H. (1980). Homing by path integration in a mammal. Naturwissenschaften, 67(11), 566-567. 2 Moser, E. I., Kropff, E., & Moser, M. B. (2008). Place cells, grid cells, and the brain’s spatial representation system. Annu. Rev. Neurosci., 31, 69-89. 3 Taube, J. S. (2007). The head direction signal: origins and sensory-motor integration. Annu. Rev. Neurosci., 30, 181-207.



1. Department of Clinical Neurobiology, Medical Faculty of Heidelberg University, Germany 2. German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Germany doi: 10.12751/nncn.bc2015.0072

Posters Tuesday

[T 56] Use of the instantaneous time constant to characterize windows of optimal synaptic integration in high frequency oscillations Antonio Yanez1 , Timm Hondrich1 , Andreas Draguhn1,2 , Martin Both1 1. Institute of Physiology and Pathophysiology, Heidelberg University, 69120 Heidelberg, Germany 2. Bernstein Center for Computational Neuroscience (BCCN) Heidelberg/Mannheim, 69120 Heidelberg, Germany doi: 10.12751/nncn.bc2015.0073

Hippocampal spontaneous activity presents high-frequency oscillations (~200 Hz) termed ripples. During ripples, CA1 pyramidal cells receive highly synchronized excitatory input, activating AMPA synapses. Their kinetics present a fast decay time constant (~2 ms), as has been well investigated in voltage-clamp studies. Nevertheless, the effect that the opening of the synaptic receptors has on signal integration is not yet fully described. Indeed, the opening of the AMPA receptors yields an overall conductance increase, which in turn decreases the time constant (τ = C/g) for as long as the receptors are open. The time constant sets the time scale at which the membrane potential changes; hence, a reduced time constant leads to faster, enhanced signal integration. We propose a method to record these conductance changes without the need to clamp the neuron to a fixed voltage. The core idea is to inject a periodic current into the neuron and provide the same defined synaptic stimulus repetitively, while systematically changing the phase of the periodic current injection in each trial. From the same time points of different trials, a value for the "instantaneous" time constant can be extracted; this value varies inversely with the total cell conductance and thus tracks the postsynaptic currents (PSCs) measured in voltage-clamp configuration, with the advantage that it allows working in the dynamic, more physiological current clamp. Computational simulations show that, for two synaptic AMPA-like inputs, as long as there is some overlap of the conductance changes, the maximum of the integrated EPSP is constant. Furthermore, application of this method to CA1 pyramidal cells in acute slices subjected to Schaffer collateral stimulation can qualitatively reflect the conductance changes. In summary, our method allows studying the conductance variations of synaptic activity using the more physiological current-clamp configuration.
With this method we can see the existence of a window of optimal integration of AMPA synapses in the ripple frequency range. Acknowledgements A.Y. was supported by "la Caixa" and the DAAD.
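The central relation τ = C/g is easy to verify numerically: opening an AMPA-like conductance lowers the instantaneous time constant and speeds up membrane relaxation. The capacitance and conductance values below are generic illustrations, not measurements from the study.

```python
import numpy as np

C, G_L = 100e-12, 5e-9                 # 100 pF, 5 nS leak -> resting tau = 20 ms

def tau_eff(g_syn):
    """Instantaneous membrane time constant tau = C / g_total."""
    return C / (G_L + g_syn)

def relax(g_syn, t=5e-3, dt=1e-5, v0=10e-3):
    """Euler integration of C dV/dt = -(G_L + g_syn) V from V(0) = v0."""
    v, g = v0, G_L + g_syn
    for _ in range(int(round(t / dt))):
        v += dt * (-g * v / C)
    return v
```

With 15 nS of open AMPA-like conductance, τ drops from 20 ms to 5 ms, so after 5 ms the voltage has relaxed to roughly 1/e of its starting value — the faster integration window discussed above.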

[T 57] Detecting cell assemblies in multiple single unit recordings Eleonora Russo1,2 , Daniel Durstewitz1,2 1. Department of Theoretical Neuroscience, ZI - Central Institute of Mental Health, J5, 68159, Mannheim, Germany 2. Bernstein-Center for Computational Neuroscience, Heidelberg-Mannheim, Germany doi: 10.12751/nncn.bc2015.0074

Since Hebb’s (1949) original proposal, the idea that neurons functionally organize into assemblies for representational and computational purposes has been highly influential in theoretical and experimental neuroscience. The assembly concept has undergone many different interpretations with regards to the exact nature of neural activity organization.

Acknowledgements This work was funded through a grant from the German Ministry of Education and Research (BMBF, 01GQ1003B) and from the German Science Foundation (Du 354/8-1, SFB 1134).

[T 58] Spiking Neural Networks for Temporal Pattern Classification of Speech Alex Rowbottom, André Grüning, Brian Gardner doi: 10.12751/nncn.bc2015.0075

The process of how auditory neurons encode and learn to recognise speech signals is still an open question. Humans learn to differentiate between even very slight variations of tone, identifying words with ease, whilst current methods of speech recognition in machines either perform poorly in comparison or are biologically implausible. Here we propose a biologically inspired method for audio-to-spike encoding and the recognition of speech using spiking neural networks. We implement a deterministic adaptation of Pfister’s learning rule [1] and the Tempotron learning rule [2] for training single-layer networks using the NEST simulator [3], and develop a method of audio-to-spike encoding based upon Fourier transforms. We achieve a good level of performance from the Tempotron rule with the classification of up to five different spoken digits, where we have a binary output neuron for each class. The Pfister rule also achieves good performance, with the ability to classify up to three different spoken digits with 90% accuracy using just one output neuron. We also experiment with the number of spikes in the target spike train for Pfister learning, determining the precise coding of two spikes to be optimal in this particular task.


Assemblies have been associated with exactly synchronized spiking times within a collection of neurons, with precise temporal sequences of spiking patterns, with temporally ordered patterns of firing rate or bursting activity, or much more generally just with conjoint increases of the firing rates within a neuron set. Thanks to recent advances in electrophysiological techniques for recording multiple single units (MSU) simultaneously, it is now possible to test and investigate this key concept experimentally. In our work we propose a novel, simple, and comparatively fast method for detecting cell assemblies from MSU recordings at various organization levels. From the occurrence frequencies of firing patterns we derive an F-distributed statistic which enables parametric significance testing. Compared to other detection methods which are often designed with a specific definition of cell assemblies in mind, our method is able to detect the presence of a large variety of different assembly concepts at different temporal resolutions. Moreover, we propose a novel approach to deal with non-stationarity in spiking time series. Our algorithm ultimately returns the identity of the units involved in cell assemblies, the assemblies’ activation times, and the temporal resolution that describes each assembly structure best.


A graph to show the progression of the accuracy of the Pfister rule, with three classes and two target spikes over 500 training iterations. References 1 Pfister, J.-P., Toyoizumi, T., Barber, D., & Gerstner, W. (2006). Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6), 1318-1348. 2 Gütig, R., & Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing–based decisions. Nature Neuroscience, 9(3), 420-428. 3 Gewaltig, M.-O., & Diesmann, M. (2007). NEST (Neural Simulation Tool). Scholarpedia 2(4), 1430.
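A minimal discrete-time tempotron [2] illustrates the training loop used in such single-layer networks: each input spike contributes a weighted double-exponential PSP, and on a classification error the weights move along the PSP kernel evaluated at the time of the voltage maximum. This sketch uses one spike per afferent and toy random patterns, not the speech-encoding pipeline of the abstract; all parameters are illustrative.

```python
import numpy as np

TAU, TAU_S, THETA = 15.0, 3.75, 1.0    # membrane/synaptic time constants, threshold

def voltage(w, pattern, T=100):
    """Tempotron membrane trace: each input spike contributes a
    double-exponential PSP kernel scaled by its afferent's weight."""
    t = np.arange(T, dtype=float)
    v = np.zeros(T)
    for w_i, t_i in zip(w, pattern):
        s = t - t_i
        v += w_i * np.where(s >= 0, np.exp(-s / TAU) - np.exp(-s / TAU_S), 0.0)
    return v

def train(patterns, labels, epochs=500, lr=0.01, seed=0):
    """Error-driven tempotron rule: on a mistake, shift each weight by the
    PSP kernel evaluated at the time of the voltage maximum."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.1, len(patterns[0]))
    for _ in range(epochs):
        for p, y in zip(patterns, labels):
            v = voltage(w, p)
            t_max = float(np.argmax(v))
            if (v.max() > THETA) != y:
                s = t_max - np.asarray(p, float)
                k = np.where(s >= 0, np.exp(-s / TAU) - np.exp(-s / TAU_S), 0.0)
                w += lr * (1.0 if y else -1.0) * k
    return w

rng = np.random.default_rng(2)
p_plus, p_minus = rng.uniform(0, 100, 40), rng.uniform(0, 100, 40)
w = train([p_plus, p_minus], [True, False])
```

After training, the neuron crosses threshold for the target pattern and stays below it for the other, which is the binary "fire / don't fire" decision used per output neuron above.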

[T 59] A closer look at grid cells: Discharge statistics and firing-field shapes Johannes Nagele1 , Barış Özmen1,2,3 , Dora Csordas1,4 , Martin Stemmler1 , Andreas V.M. Herz1 1. Faculty of Biology, Ludwig-Maximilians-Universität München & Bernstein Center for Computational Neuroscience Munich, 82152 Martinsried, Germany 2. Faculty of Information Technology, Technische Universität München, 80333 Munich, Germany 3. Computer Engineering Department, Boğaziçi Üniversitesi, 34342 Bebek/İstanbul, Turkey 4. Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, 1083 Budapest, Hungary doi: 10.12751/nncn.bc2015.0076

Grid cells in the mammalian entorhinal cortex fire whenever the animal is close to the vertices of an imaginary hexagonal grid tessellating the explored space [1]. In many theoretical models of grid cells, the firing fields result from the superposition of three sinusoidal plane waves that are rotated by π/3 with respect to each other [2]. Such models make testable predictions: (1) the size of the firing fields should scale in direct proportion to the distance between fields; (2) the firing rate within a field should be nearly independent of the angle with respect to the center of the field; (3) there should be no variation in the peak firing rate from field to field. Notably, attractor neural networks [3] yield the same predictions for (2) and (3); depending on the assumed connectivity structure, they may differ regarding (1). In coding models, discharge patterns within a grid field are usually modelled as Poisson spike trains with a location-dependent firing rate [4], sometimes modulated by time-dependent processes to reflect theta oscillations and phase-precession phenomena [2]. We reanalyze experimental grid cell data from the Moser lab [5,6] to test these predictions and reexamine the validity of standard assumptions about how spatial information is encoded. We focus on (a) the symmetry properties and radial profile of grid fields, (b) the fine temporal details of the spike patterns and their spatial dependencies, and (c) discharge variability between multiple runs through the same firing field. Our findings provide constraints for future models of the dynamics and coding strategies of entorhinal grid cells.

Acknowledgements Supported by BMBF through the BCCN Munich (01GQ1004A).


References 1 Hafting T, Fyhn M, Molden S, Moser MB and Moser EI (2005). Microstructure of a spatial map in the entorhinal cortex. Nature, 436, 1248–1252. 10.1038/nature03721 2 Burgess N, Barry C, and O'Keefe J (2007). An oscillatory interference model of grid cell firing. Hippocampus 17, 801-812. 10.1002/hipo.20327 3 Fuhs MC and Touretzky DS (2006). A spin glass model of path integration in rat medial entorhinal cortex. J. Neurosci. 26, 4266–4276. 10.1523/JNEUROSCI.4353-05.2006 4 Mathis A, Herz AVM and Stemmler MB (2012). Optimal population codes for space: Grid cells outperform place cells. Neural Computation 24, 2280-2317. 10.1162/NECO_a_00319 5 Sargolini F, Fyhn M, Hafting T, McNaughton BL, Witter MP, Moser MB and Moser EI (2006). Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science, 312(5774), 758–62. 10.1126/science.1125572 6 Stensola H, Stensola T, Solstad T, Frøland K, Moser MB and Moser EI (2012). The entorhinal grid map is discretized. Nature, 492(7427), 72–78. 10.1038/nature11649
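The three-plane-wave construction that generates predictions (1)-(3) can be written down directly: the sum of three cosines with wave vectors rotated by π/3 ranges over [-1.5, 3] and has maxima on a hexagonal lattice. The field spacing and peak rate below are arbitrary illustration values.

```python
import numpy as np

def grid_rate(x, y, spacing=0.5, phase=(0.0, 0.0), r_max=1.0):
    """Idealised grid-cell rate map: sum of three cosine plane waves whose
    wave vectors are rotated by pi/3, rescaled from [-1.5, 3] to [0, r_max]."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)     # wave number for this field spacing
    g = sum(np.cos(k * np.cos(a) * (x - phase[0]) + k * np.sin(a) * (y - phase[1]))
            for a in (0.0, np.pi / 3, 2 * np.pi / 3))
    return r_max * (g + 1.5) / 4.5
```

The map peaks at the phase offset and again exactly one lattice constant away, and by construction every firing field has the same peak rate and radial profile — precisely the model assumptions the abstract puts to the test.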

[T 60] Analysis of Multiplexed Neural Codes Using the Laplacian Pyramid Decomposition Manuel Molano-Mazon1 , Arno Onken1 , Houman Safaai1 , Stefano Panzeri1 1. Laboratory of Neural Computation, Istituto Italiano di Tecnologia Rovereto, Corso Bettini 31,38068 Rovereto, Italy doi: 10.12751/nncn.bc2015.0077

Recent studies have pointed out that the neural code may use multiplexing to encode unique information at different temporal scales of spike trains [1]. However, how to mathematically separate out the different components of a neural code and to identify the unique contribution of each time scale to sensory coding and behaviour has remained an open challenge [2]. Here we investigated this problem by developing a novel approach to spike train analysis based on a popular image compression method, the Laplacian Pyramid Decomposition (LPD) [3]. This technique allows us to express the neuronal response in a basis that characterises different temporal scales (see figure). Using the LPD as a framework, we have analytically calculated the information contained in each temporal scale as follows: first, we have separated the information about the stimuli into two components: the information that each scale provides and the information that is redundant [4] across different temporal scales. In a second step, we have performed a short time-scale series expansion [5] of these two components in order to quantify the amount of information that one scale contains about another. We then showed that the first order approximation of this contamination is non-symmetric: it can be non-zero only from coarse to fine scales. Furthermore, when the stimuli do not elicit any fine pattern in the neural response, the first order components of the information contained in all scales are equal. Taking these results into account, and assuming a firing rate regime in which the first order component of the information dominates, we propose a method to attribute redundant information to a specific scale and hence obtain a well-interpretable separation of scale-specific information. Our method aims at separating information uniquely contained in each scale and thereby provides a promising analysis method to study in detail the advantages and limitations of a multiplexed neural code.


Example in which the information about the stimulus is only carried by the coarse scale but it is passed to scale 2. The figure shows the different steps of the method: scales extraction, information calculation and contamination quantification Acknowledgements This work was supported by the European Commission (FP7-ICT-2011.9.11/284553, “SICODE” and FP7-ICT-2011.9.11/600954, “VISUALISE” and H2020-MSCA-IF-2014/659227, “STOMMAC”). References 1 Panzeri, S., Brunel, N., Logothetis N. K. and Kayser, C. (2010). Sensory neural codes using multiplexed temporal scales. Trends Neurosci. 33(3):111-120 10.1016/j.tins.2009.12.001 2 Zuo, Y., Safaai, H., Notaro, G., Mazzoni, A., Panzeri, S. and Diamond, M. E. (2015). Complementary contributions of spike timing and spike rate to perceptual decisions in rat S1 and S2 cortex. Curr Biol. 25(3): 357-363 10.1016/j.cub.2014.11.065 3 Burt, P. J. and Adelson, E. H. 1983. The Laplacian Pyramid as a compact image code. IEEE T Commun. 31(4): 532-540 10.1109/TCOM.1983.1095851 4 Gawne, T. J. and Richmond, B. J. (1993). How independent are the messages carried by adjacent inferior temporal cortical neurons? J Neurosci. 13(7):2758-2771 5 Panzeri, S. and Schultz, S. R. (2001). A unified approach to the study of temporal, correlational, and rate coding. Neural Comput. 13(6):1311-1349
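A one-dimensional Laplacian pyramid [3] applied to a binned spike-count signal shows the basic mechanics: each level stores the detail lost by smoothing and downsampling, and the signal is recovered exactly from the pyramid. This is a generic sketch of the decomposition itself, not the information-theoretic machinery of the abstract; kernel and depth are illustrative choices.

```python
import numpy as np

def lp_decompose(x, levels=3):
    """1-D Laplacian pyramid (Burt & Adelson 1983) of a signal: each level
    stores the detail lost by smoothing + downsampling by a factor of 2."""
    pyramid, cur = [], np.asarray(x, float)
    for _ in range(levels):
        smooth = np.convolve(cur, [0.25, 0.5, 0.25], mode="same")
        down = smooth[::2]
        up = np.repeat(down, 2)[:len(cur)]   # crude upsampling by repetition
        pyramid.append(cur - up)             # fine-scale detail at this level
        cur = down
    pyramid.append(cur)                      # coarsest residual
    return pyramid

def lp_reconstruct(pyramid):
    """Invert the pyramid: upsample the coarse level and add back each detail."""
    cur = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        cur = detail + np.repeat(cur, 2)[:len(detail)]
    return cur

x = np.random.default_rng(0).poisson(3.0, 64).astype(float)
pyr = lp_decompose(x)
```

Because each level stores exactly the residual of its own down-up round trip, reconstruction is lossless regardless of the smoothing filter, which is what makes the per-scale components a complete basis for the response.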

[T 61] The rheobase curves of a single impulse of conductance for the Hodgkin-Huxley model of different excitability classes Alexander Paraskevov1 1. National Research Centre "Kurchatov Institute", 1 Kurchatov sq., Moscow, 123182, Russian Federation doi: 10.12751/nncn.bc2015.0078

We show that the rheobase curve, i.e., the dependence of the minimal amplitude of a spike-triggering stimulus on its duration, can determine the neuron's excitability class (in Hodgkin's classification [1]) if the rising part of the stimulating impulse is smooth enough. In particular, for the point-like Hodgkin-Huxley model neuron stimulated by a single conductance impulse in the form of an alpha function, the rheobase curves are obtained (Fig. 1). For neurons of the first class ("integrators"), the rheobase curve is a monotonically decreasing, hyperbola-like function, whereas for neurons of the second class ("resonators") the rheobase curve has a local minimum. (Note that such a minimum is absent for a rectangular stimulating impulse; in this case, it is impossible to distinguish the neuronal excitability class by the rheobase curves.) This makes neurons of the second class react selectively to a comparatively weak stimulus. The theoretical prediction of the relationship between (a) the neuron's rheobase to a single impulse of conductance/current and (b) the neuronal excitability class allows direct experimental verification by dynamic/current clamp.


The rheobase curves for the Hodgkin-Huxley model of the first (blue curve, [2]) and second (green curve, [3]) excitability classes. The position of the local minimum is robust against changes of the value of the reversal potential E_rev. Acknowledgements The author thanks Máté Lengyel and Ole Paulsen for helpful remarks. The work was partially funded by the Wellcome Trust (UK). References 1 A.L. Hodgkin. The local electric changes associated with repetitive action in a non-medullated axon. J. Physiol. 107, 165–181 (1948) 2 Sevgi Şengül, Robert Clewley, Richard Bertram, Joël Tabak. Determining the contributions of divisive and subtractive feedback in the Hodgkin-Huxley model. J. Comput. Neurosci. 37, 403-415 (2014) 3 C. Koch. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, 1999
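The rheobase-curve construction — bisecting on the amplitude of an alpha-shaped conductance impulse for the minimal spike-triggering value — can be sketched with a conductance-based leaky integrate-and-fire stand-in for the class-1 ("integrator") case. The full Hodgkin-Huxley machinery is deliberately omitted, and all parameters are illustrative.

```python
import math

def spikes(g_max, t_peak, dt=0.02, t_end=100.0):
    """Does one alpha impulse g(t) = g_max (t/t_peak) exp(1 - t/t_peak)
    push a conductance-based LIF neuron (tau_m = 10 ms) over threshold?"""
    tau_m, v_th, e_syn, v = 10.0, 1.0, 5.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        g = g_max * (t / t_peak) * math.exp(1.0 - t / t_peak)
        v += dt * (-v / tau_m + g * (e_syn - v))
        if v >= v_th:
            return True
    return False

def rheobase(t_peak, lo=0.0, hi=5.0, iters=40):
    """Bisect for the minimal spike-triggering amplitude:
    `hi` always spikes, `lo` never does."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if spikes(mid, t_peak) else (mid, hi)
    return hi

curve = [rheobase(tp) for tp in (0.5, 1.0, 2.0, 4.0, 8.0)]
```

For a passive integrator this yields the hyperbola-like, monotonically decreasing curve described for class-1 neurons; a resonator model would instead produce the local minimum highlighted above.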

[T 62] Scalable bundle design for large scale neuronal recordings in vivo William Wray1 , Mihaly Kollo1 , Romeo R. Racz1 , Nikolai Kiskin1 , Matthew R. Angle2 , Andreas T. Schaefer1,3 1. The Francis Crick Institute Mill Hill Laboratory, London, United Kingdom 2. Stanford University, Stanford, CA, United States 3. Neuroscience, Physiology & Pharmacology, University College London, United Kingdom doi: 10.12751/nncn.bc2015.0079

Neural coding is comprised of precise interactions between thousands to millions of neurons. New techniques are needed to measure the time-sensitive interactions within entire neural networks to better understand the complex patterns underlying brain function. Extracellular recording is one of the oldest methods of measuring neural activity and can sample at a temporal resolution sufficient to resolve fast-spiking neurons. If scaled to a sufficiently large number of simultaneously recorded neurons, this technique would be an excellent candidate for such large-scale recording. To record from large numbers of single units, an ideal system would have non-intrusive electrode dimensions, low impedance at the electrode-electrolyte interface, low stray capacitance, and a high signal-to-noise ratio. We propose the combined use of insulated metal microwire bundles and the readout integrated circuits (ROICs) from high-speed infrared cameras for large-scale neuronal recording in vivo. Glass-ensheathed microwires with customizable diameters for the metal core (2-15 um) and the surrounding glass (10-40 um) were grouped into bundles of 10-10,000 individual wires. These bundle electrodes were contacted to the exposed 10 um pixels of the ROIC within a Cheetah 640-CL1700 camera containing more than 300,000 capacitive transimpedance amplifiers.

The significance of the empirical population coincidence count n_pop,emp exceeding the threshold ξ_c is derived by comparison to the distribution of counts obtained from multiple realizations of a Compound Poisson Process (CPP) [4], which includes a given order of correlation. The order of correlation included in the CPP is defined by the amplitude distribution, and may be additionally constrained to a particular average pairwise correlation and particular firing rates of the neurons. The performance of the method is tested by application to artificial data of known ground truth, and shows a high degree of specificity and sensitivity, also for small sample sizes.
This allows applying the method in a time-resolved manner to observe time-dependent changes of the correlation, even in single-trial data. The application of the method to experimental data from monkey motor cortex recorded during reach-to-grasp behavior [5] shows a higher occurrence probability of higher-order synchrony during the delay period than during the movement period. Acknowledgements Partially financed by the Helmholtz Portfolio Supercomputing and Modeling for the Human Brain (SMHB), the Human Brain Project (HBP, EU grant 604102) and BrainScaleS (EU grant 269912). References 1 Grün, Diesmann, Aertsen (2002a,b) Neural Comput, 14(1): 43-80; 81-119 DOI:10.1162/089976602753284455 2 Riehle, Grün, Diesmann, Aertsen (1997) Science 278: 1950-1953 DOI:10.1126/science.278.5345.1950 3 Kilavik, Roux, Ponce-Alvarez, Confais, Grün, Riehle (2009) J Neurosci, 29(40): 12653-12663 DOI:10.1523/JNEUROSCI.1554-09.2009 4 Staude, Rotter, Grün (2010) J Comput Neurosci 29(1-2): 327-350 DOI:10.1007/s10827-009-0195-x 5 Riehle, Wirtssohn, Grün, Brochier (2013) Front Neural Circuits 7:48 DOI:10.3389/fncir.2013.00048
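The CPP surrogate [4] itself is straightforward to generate: each spike of a "mother" Poisson process is copied into a randomly chosen subset of trains, with the subset size (the correlation order) drawn from the amplitude distribution. A minimal sketch with illustrative rates and an illustrative amplitude distribution:

```python
import numpy as np

def compound_poisson(n_trains, mother_rate, orders, probs, t_max, seed=0):
    """Compound Poisson Process: each mother event is copied into `a`
    randomly chosen trains, with a ~ amplitude distribution (orders, probs)."""
    rng = np.random.default_rng(seed)
    trains = [[] for _ in range(n_trains)]
    n_events = rng.poisson(mother_rate * t_max)
    for t in rng.uniform(0.0, t_max, n_events):
        a = rng.choice(orders, p=probs)                 # correlation order
        for i in rng.choice(n_trains, size=a, replace=False):
            trains[i].append(t)
    return [np.sort(tr) for tr in trains]

# 20 trains; 80% of events are background (order 1), 20% are order-5 synchrony
trains = compound_poisson(20, mother_rate=10.0, orders=[1, 5],
                          probs=[0.8, 0.2], t_max=50.0)
```

Comparing an empirical coincidence count against many such realizations, matched in rate and pairwise correlation, is exactly the surrogate test the abstract describes.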


[W 10] Cross-Validated Bayesian Model Selection for Methodological Control in fMRI Data Analysis Joram Soch1,2 , Carsten Allefeld1,3 , John-Dylan Haynes1,2,3,4,5,6

Introduction: Suboptimal models for scientific data can result in suboptimal explanations of empirical phenomena and false conclusions about experimental effects. In neuroimaging, model assessment is rarely performed for the general linear models (GLMs) [1] used to analyze data from functional magnetic resonance imaging (fMRI) [2]. We present a principled approach, termed cross-validated Bayesian model selection (cvBMS), that makes it possible to decide at the population level which GLM best describes a given fMRI data set. Methods: In the GLM, fMRI data (y) are modelled as a linear combination (β) of experimental conditions (X) with normally distributed errors (ε): y = Xβ + ε, ε ∼ N(0, σ²V). On the first level, we use the Bayesian log model evidence (LME) [3, 4] in combination with leave-one-out cross-validation (CV) as a measure of model quality. On the second level, we use random-effects Bayesian model selection (RFX BMS) [5, 6, 7] for population inference within a model space. Results: We analyzed two empirical data sets. Using data on orientation pop-out processing [8], we identify a psychologically plausible model as the best explanation of neural processing in orientation-sensitive area V4 and confirm the contra-lateral nature of visual perception via model selection. Using data from a conflict adaptation paradigm [9], we dissociate preferences for block-design and event-related models in the brain and identify the optimal modelling of reaction times in task-related regions. Discussion: In this work, we have introduced a method of model selection for GLMs for fMRI. Our empirical examples show that this method is able to compare competing computational models of neural processing as well as to guide model specification during first-level fMRI data analysis. We therefore envisage applications in methodological control in cognitive neuroscience as well as theory selection in computational neuroscience.
Acknowledgements This work was supported by a Research Track Scholarship from the Humboldt Graduate School (J.S.) and an Elsa Neumann Scholarship from the State of Berlin (J.S.).



1. Bernstein Center for Computational Neuroscience, Berlin, Germany 2. Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany 3. Berlin Center of Advanced Neuroimaging, Berlin, Germany 4. Berlin School of Mind and Brain, Berlin, Germany 5. Excellence Cluster NeuroCure, Charité-Universitätsmedizin, Berlin, Germany 6. Department of Neurology, Charité-Universitätsmedizin, Berlin, Germany doi: 10.12751/nncn.bc2015.0131

Posters Wednesday References 1 Friston KJ, Holmes AP, Worsley KJ, Poline JP, Frith CD, Frackowiak RSJ (1995). Statistical Parametric Maps in Functional Imaging: A General Linear Approach. Human Brain Mapping, vol. 2, pp. 189-210 10.1002/hbm.460020402 2 Razavi M, Grabowski TJ, Vispoel WP, Monahan P, Mehta S, Eaton B, Bolinger L (2003). Model Assessment and Model Building in fMRI. Human Brain Mapping, vol. 20, pp. 227-238 10.1002/hbm.10141 3 Koch KR (2007). Introduction to Bayesian Statistics. Springer, ch. 4.3.2, pp. 117-121 4 Bishop CM (2006). Pattern Recognition and Machine Learning. Springer, ch. 3.4, pp. 161-165 5 Stephan KE, Penny WD, Daunizeau J, Moran RJ, Friston KJ (2009). Bayesian Model Selection for Group Studies. NeuroImage, vol. 46, pp. 1004-1017 10.1016/j.neuroimage.2009.03.025 6 Penny WD, Stephan KE, Daunizeau J, Rosa MJ, Friston KJ, Schofield TM, Leff AP (2010). Comparing Families of Dynamic Causal Models. PLoS ONE, vol. 6, art. e1000709 10.1371/journal.pcbi.1000709 7 Rosa MJ, Bestmann S, Harrison L, Penny W (2010). Bayesian model selection maps for group studies. NeuroImage, vol. 49, pp. 217-224 10.1016/j.neuroimage.2009.08.051 8 Bogler C, Bode S, Haynes JD (2013). Orientation pop-out processing in human visual cortex. NeuroImage, vol. 81, pp. 73-80 10.1016/j.neuroimage.2013.05.040 9 Meyer A, Soch J, Haynes JD (in prep.). Decoding behavioral adaptation under stimulus conflict

[W 11] Prediction of Cognitive Decline on a Continuous Scale in Alzheimer’s Disease: A Comparison of Different Model Classes Enny H. van Beest1,2,3 , Kerstin Ritter2,3 , Carsten Allefeld2,3 , John-Dylan Haynes2,3,4 1. Vision and Cognition, Netherlands Institute for Neuroscience, Meibergdreef 47, 1105 BA Amsterdam, The Netherlands 2. Bernstein Center for Computational Neuroscience, Charité-Universitätsmedizin Berlin, Haus 6, Philippstraße 12, 10115 Berlin, Germany 3. Berlin Center for Advanced Neuroimaging, Charité-Universitätsmedizin Berlin, Sauerbruchweg 4, Charitéplatz 1, 10117 Berlin, Germany 4. Berlin School of Mind and Brain, Humboldt-Universität, Luisenstraße 56, Haus 1, 10117 Berlin, Germany doi: 10.12751/nncn.bc2015.0132

In this study, we compared different model classes on their performance in predicting cognitive decline in elderly single subjects after 6, 12 and 24 months, based on baseline data measured at 0 months. Data of healthy controls and of patients with mild cognitive impairment or Alzheimer’s disease were obtained from the database of the Alzheimer’s Disease Neuroimaging Initiative [1]. Cognitive decline was measured by two continuous neuropsychological (NP) scores: the Mini Mental State Exam (MMSE) and the Alzheimer’s Disease Assessment Scale (ADAS). Baseline scores included MRI, PET, CSF proteins, gene information and NP scores. Three different comparisons were made: first, Gaussian processes versus structural risk minimization; second, single-task learning versus multi-task learning; and third, multimodality integration versus concatenation (implementations and toolboxes: [2–5]). An additional question was whether we need multiple modalities at all, or whether single modalities could predict NP scores just as accurately. Using all modalities combined, we found correlations between 0.77 and 0.90 between true and predicted scores for every model class. When comparing different models, we found that 1) model classes with structural risk minimization performed better than those with Gaussian processes and 2) using multiple modalities in an integrated way gave higher prediction accuracies than using concatenation or single modalities. However, these effects were small (differences in correlation between models were all smaller than 0.1) and performance depended on the time point of prediction and the clinical group. In our models we assumed that prediction of cognitive decline can be solved as a linear combination of input features, whereas several models describe biomarkers as developing dynamically over time [6]. We believe we can further improve our most

accurate model - structural risk minimization with integrated modalities - by taking more complex weights into account.
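The accuracy measure used above, the correlation between true and predicted continuous scores, can be sketched with a simple stand-in predictor. Closed-form ridge regression serves here as one elementary instance of structural risk minimization; the features, score, and regularization constant are synthetic and not taken from the ADNI pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1.0):
    """Closed-form ridge regression, one elementary instance of
    structural risk minimization (regularized empirical risk)."""
    d = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)
    return X_te @ w

def pearson(a, b):
    """Pearson correlation between true and predicted scores."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# synthetic stand-in for baseline features and a continuous NP score
n, d = 200, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(scale=0.5, size=n)

idx = rng.permutation(n)
train, test = idx[:150], idx[150:]
y_hat = ridge_fit_predict(X[train], y[train], X[test])
r = pearson(y[test], y_hat)
```

The train/test split mimics evaluating prediction on unseen subjects; in the study the corresponding correlations were computed per model class and time point.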


a-c show Fisher-transformed r-values and e-g log-transformed MSE-values, grouped according to different comparisons. d,h show corrected confidence intervals per main effect. Acknowledgements Ritter, van Beest and Haynes designed the study. van Beest and Ritter conducted the study. Haynes supervised the study. Allefeld assisted with statistical analysis. The authors thank Dr. André Marquan References 1 ADNI | Background & Rationale, http://adni.loni.usc.edu/study-design/background-rationale/, accessed on May 22, 2015. 2 Rasmussen C.E., Nickish, H. (2013). The GPML Toolbox version 3.5. Toolbox. 1-32 3 Zhou, J., Chan, J., Ye, J. (2012). MALSAR: Multi-task Learning via Structural Regularization. Arizona State Univ. 4 Marquand, A.F., Brammer, M., Williams, S.C.R., Doyle, O.M. (2014). Bayesian multi-task learning for decoding multi-subject neuroimaging data. Neuroimage. 92, 298-311 10.1016/j.neuroimage.2014.02.008 5 Jalali, A., Ravikumar, P., Sanghavi, S., Ruan, C., (2010). A Dirty Model for Multi-task Learning. Nips. 1–9 6 Jack Jr, C.R., Knopman, D.S., Jagust, W.J., Shaw, L.M., Aisen, P.S., Weiner, M.W., Petersen, R.C., Trojanowski, J.Q., (2010). Hypothetical model of dynamic biomarkers of the Alzheimer's pathological cascade. Lancet, The. 9, 1–20 doi:10.1016/S1474-4422(09)70299-6

[W 12] Statistical Inference in Networks with Hidden Units

Stojan Jovanovic1,2 , Benjamin Adric Dunn3 , Yasser Roudi3,4 , John Hertz4,5 1. Bernstein Center Freiburg, Faculty of Biology, Albert-Ludwigs University, Hansastrasse 9a, 79106 Freiburg, Germany 2. Computational Biology, KTH Royal Institute of Technology, Lindstedsvaegen 24, S-11428 Stockholm, Sweden 3. Kavli Institute for Systems Neuroscience, NTNU, 7030 Trondheim, Norway 4. NORDITA, Roslagstullsbacken 23, S-10691 Stockholm, Sweden 5. Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen, Denmark doi: 10.12751/nncn.bc2015.0133

With recent advances in high-throughput recordings, researchers are turning to statistical models to interpret the data. These methods, however, are limited by the extent to which the population is covered. The effect of what remains hidden on what can be inferred is currently not understood and poses a significant challenge. Here, we have sought ways to understand and correct the effects of sub-sampling in inference for

the cases of kinetic Ising and Generalized Linear Models. In this work, we derive a second-order method to account for these errors by explicitly including hidden nodes and then, using approximation techniques, marginalizing out their effect. Through application of this framework to Ising networks of varying relative population size and coupling strength, we assess how these unknown variables can influence inference and to what degree they can be accounted for. Acknowledgements Supported by Erasmus Mundus Joint Doctoral programme EuroSPIN and the German Federal Ministry of Education and Research (BMBF 01GQ0420 to BCCN Freiburg). References 1 Benjamin Dunn and Yasser Roudi, Learning and inference in a nonequilibrium Ising model with hidden nodes http://dx.doi.org/10.1103/PhysRevE.87.022127 2 Joanna Tyrcha and John Hertz, Network Inference with Hidden Units arXiv:1301.7274
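For the fully observed case, coupling inference in a kinetic Ising model reduces to one logistic-regression problem per unit. The sketch below simulates parallel Glauber dynamics and recovers the couplings by gradient ascent on the exact dynamics likelihood; the hidden-node correction derived in the abstract is deliberately omitted, and network size, coupling scale, and learning rate are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 8, 10000
J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))   # true couplings (toy scale)

# simulate parallel Glauber dynamics of a fully observed kinetic Ising model:
# P(s_i(t+1) = +1 | s(t)) = 1 / (1 + exp(-2 h_i)),  h_i = sum_j J_ij s_j(t)
s = np.empty((T, N))
s[0] = np.sign(rng.normal(size=N))
for t in range(T - 1):
    h = s[t] @ J.T
    s[t + 1] = np.where(rng.random(N) < 1.0 / (1.0 + np.exp(-2.0 * h)), 1.0, -1.0)

# per-unit logistic regression: gradient ascent on the dynamics log-likelihood,
# whose gradient is the mean over t of (s_i(t+1) - tanh(h_i(t))) s(t)
X, Y = s[:-1], s[1:]
J_hat = np.zeros_like(J)
for i in range(N):
    w = np.zeros(N)
    for _ in range(200):
        grad = (Y[:, i] - np.tanh(X @ w)) @ X / X.shape[0]
        w += 0.5 * grad
    J_hat[i] = w

corr = float(np.corrcoef(J.ravel(), J_hat.ravel())[0, 1])
```

Withholding a subset of units from the regression biases J_hat, which is precisely the sub-sampling effect the second-order correction in the abstract is designed to remove.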

[W 13] Entropy based quantification of in vivo and in vitro neuronal bursts

Fikret Emre Kapucu1 , Jarno E. Mikkonen2 , Jarno M.A. Tanskanen1 , Jari A.K. Hyttinen1 1. Department of Electronics and Communications Engineering, Computational Biophysics and Imaging Group, Tampere University of Technology, Tampere, Finland 2. Department of Psychology, Oscillatory Brain Research Group, University of Jyväskylä, Jyväskylä, Finland doi: 10.12751/nncn.bc2015.0134

Background: Neuronal activity is typically described in terms of spikes and bursts [1, 2, 3]. Bursts carry important information content and influence plasticity [4, 5]. Thus, they are widely assessed for understanding network behavior. Deriving parameters and metrics from neuronal bursts is a common procedure. Previously, bursts have been assessed and classified with respect to several parameters such as burst duration, burst amplitude, inter-burst interval (IBI) and power spectrum [6, 7, 8, 9]. On the other hand, bursts should be detected automatically and without biased or predefined criteria to prevent subjectivity [10]. Also, freely defined bursts can be characterized further according to desired metrics. Standardized and more robust analysis methods and parameters are needed for evaluating neuronal networks’ responses to different treatments [11]. In addition to commonly used parameters, new metrics would provide additional and enhanced information. Aim: Our aim is to derive metrics for burst quantification which can be used for enhanced network analysis. The regularity/complexity of time series can be quantified by entropy measures, and the obtained entropy values can be used to analyze neuronal networks. Methods: We conduct the following procedure: • An adaptive burst detection algorithm is employed to select burst start and end times. • Different entropy values are calculated for burst epochs. • Statistics are calculated for the entropy values under different conditions. Results: We showed that: • entropy measures can be used for quantifying neuronal bursts; • an automated adaptive burst detection algorithm allows quantifying large data sets to obtain statistically significant results; • there are differences between different entropy measures. Conclusions: Our results indicate that the proposed metrics show potential for such tasks, and further studies could interpret the entropy values from the point of view of neuronal network synchrony. 
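A minimal version of the procedure (detect burst epochs, then compute an entropy value per epoch) might look as follows. The burst detector here is a naive fixed-ISI rule, not the adaptive algorithm of [10], and the Shannon entropy of the within-burst inter-spike intervals stands in for the various entropy measures compared in the abstract; the toy spike train is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def shannon_entropy(values, bins=10):
    """Shannon entropy (bits) of the histogram of `values`."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def burst_epochs(spike_times, max_isi=0.01, min_spikes=3):
    """Naive burst detection: runs of spikes with ISIs below `max_isi`."""
    bursts, current = [], [spike_times[0]]
    for t0, t1 in zip(spike_times[:-1], spike_times[1:]):
        if t1 - t0 <= max_isi:
            current.append(t1)
        else:
            if len(current) >= min_spikes:
                bursts.append(np.array(current))
            current = [t1]
    if len(current) >= min_spikes:
        bursts.append(np.array(current))
    return bursts

# toy spike train: three regular bursts embedded in sparse background firing
burst_starts = [0.1, 0.5, 0.9]
spikes = np.sort(np.concatenate(
    [s + 0.002 * np.arange(8) for s in burst_starts] + [rng.uniform(0, 1.2, 5)]))
bursts = burst_epochs(spikes)
entropies = [shannon_entropy(np.diff(b)) for b in bursts]
```

Perfectly regular bursts yield entropy near zero; irregular within-burst timing raises it, which is the sense in which entropy quantifies burst regularity/complexity.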

Acknowledgements The work of Kapucu and Tanskanen has been supported by the 3DNeuroN project in the European Union’s Seventh Framework Programme, Future and Emerging Technologies, grant agreement n°296590.


References 1 E. R. Kandel and W. A. Spencer, “Electrophysiology of hippocampal neurons. II. After-potentials and repetitive firing,” J. Neurophysiol., vol. 24, pp. 243–259, 1961. Available: http://jn.physiology.org/content/24/3/2 2 B. W. Connors, M. J. Gutnick, and D. A. Prince, “Electrophysiological properties of neocortical neurons in vitro,” J. Neurophysiol., vol. 48, pp. 1302–1320, 1982. Available: http://jn.physiology.org/content/48/6/1302 3 C. M. Gray and D. A. McCormick, “Chattering cells: superficial pyramidal neurons contributing to the generation of synchronous oscillations in the visual cortex,” Science, vol. 274, pp. 109–113, 1996. DOI:10.1126/science.274.5284.109 4 J. E. Lisman (1997). Bursts as a unit of neural information: making unreliable synapses reliable. Trends Neurosci. 20, 38–43 5 P. Massobrio, J. Tessadori, M. Chiappalone, and M. Ghirardi, “In Vitro Studies of Neuronal Networks and Synaptic Plasticity in Invertebrates and in Mammals Using Multielectrode Arrays,” Neural Plasticity, vol. 2015, Article ID 196195, 18 6 D. A. Wagenaar, J. Pine, and S. M. Potter, “An extremely rich repertoire of bursting patterns during the development of cortical cultures,” BMC Neuroscience, vol. 7, no. 11, 2006. 7 R. A. J. van Elburg and A. van Ooyen, “A new measure for bursting,” Neurocomputing, vol. 58–60, pp. 497–502, 2004. 8 W. Bair, C. Koch, W. Newsome, and K. Britten, “Power spectrum analysis of bursting cells in area MT in the behaving monkey,” J. Neurosci., vol. 14, pp. 2870–2892, 1994. 9 E. W. Keefer, A. Gramowski, D. A. Stenger, J. J. Pancrazio and G. W. Gross, “Characterization of acute neurotoxic effects of trimethylolpropane phosphate via neuronal network biosensors,” Biosens. Bioelectron., vol. 16, pp. 513–525, 2001. 10 F. E. Kapucu, J. M. A. Tanskanen, J. E. Mikkonen, L. Ylä-Outinen, S. Narkilahti, and J. A. K. Hyttinen, ”Burst analysis tool for developing neuronal networks exhibiting highly varying action potential dynamics,” Front. Comput. 
Neurosci., vol. 6. 11 R. Äänismaa, L. Ylä-Outinen, J.E. Mikkonen, S. Narkilahti (2011). “Human pluripotent stem cell-derived neuronal networks: their electrical functionality and usability for modelling and toxicology,” in Methodological Advances in the Culture, Manipulation

[W 14] An Algorithm to Automatically Set Neuronal Action Potential Spike Detection Thresholds Jarno M. A. Tanskanen1 , Fikret Emre Kapucu1 , Jari A. K. Hyttinen1 1. Department of Electronics and Communications Engineering, Tampere University of Technology, and BioMediTech, Finn-Medi 1 L 4, Biokatu 6, FI-33520 Tampere, Finland doi: 10.12751/nncn.bc2015.0135

Action potential spike detection [1,2] is most often performed by thresholding, with the thresholds set by a human operator or, by convention, at a multiple of the standard deviation of the noise. Usually, the thresholds are symmetric for negative and positive spikes. Here, we propose a method to automatically set spike detection thresholds separately for positive and negative spikes. This is done by analyzing signal amplitude histograms and their gradients. From the data, first, an amplitude histogram is formed and its gradient calculated. Next, local minima and maxima of the gradient are found, and the corresponding amplitudes are used as thresholds. The method relies on a shoulder-like feature in the signal amplitude histogram appearing close to the maximum noise amplitude; the action potential spikes and the noise form distinct features in the amplitude histogram, so that the contributions of action potentials reaching sufficiently above the noise floor can be identified. The shoulder-like feature is made more detectable by the calculation of the gradient. The concept was proposed in [3]; here, the method is applied to a larger data set, also studying its limitations with regard to the properties of the data. We demonstrate the method by simulating action potential microelectrode measurements for a round and flat in vitro culture and for a hemispherical in vivo 3D brain volume,

and then apply the method to in vitro cell culture data and to in vivo rat cortical data. Although the method is essentially ad hoc, the resulting thresholds are mostly well in line with possible human operator decisions. Acknowledgements This research has been supported by the 3DNeuroN project in the European Union’s Seventh Framework Programme, Future and Emerging Technologies, grant agreement n°296590. References 1 M. S Lewicki, “A review of methods for spike sorting: the detection and classification of neural action potentials,” Netw. Comput. Neural Syst., vol. 9, no. 4, 1998, pp. R53–R78. 2 S. B. Wilson and R. Emerson, “Spike detection: a review and comparison of algorithms,” Clin. Neurophysiol., vol. 113, no. 12, Dec. 2002, pp. 1873–1881. 3 J. M. A. Tanskanen, F. E. Kapucu, and J. A. K. Hyttinen, “On the threshold based neuronal spike detection, and an objective criterion for setting the threshold,” in Proc. 7th Ann. Int. IEEE EMBS Conf. Neural Eng., France, Apr. 2015, pp. 1016–1019.
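The histogram-gradient idea can be sketched as follows: with bounded noise, the steepest drop in the amplitude histogram on the positive side (and the steepest rise on the negative side) sits near the maximum noise amplitude, and its location serves as the threshold. This is a simplified reading of the method (the published algorithm's handling of the shoulder feature and of realistic noise shapes differs), and the toy signal is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def histogram_gradient_thresholds(signal, bins=200):
    """Pick detection thresholds at the extrema of the amplitude-histogram
    gradient: the steepest drop on the positive side and the steepest rise
    on the negative side mark the edges of the noise distribution."""
    counts, edges = np.histogram(signal, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    grad = np.gradient(counts.astype(float), centers)
    thr_pos = centers[centers > 0][np.argmin(grad[centers > 0])]
    thr_neg = centers[centers < 0][np.argmax(grad[centers < 0])]
    return thr_neg, thr_pos

# toy signal: bounded noise plus a few large positive and negative "spikes"
noise = rng.uniform(-1.0, 1.0, size=20000)
spikes = np.concatenate([rng.normal(4.0, 0.1, 100), rng.normal(-4.0, 0.1, 100)])
signal = np.concatenate([noise, spikes])
thr_neg, thr_pos = histogram_gradient_thresholds(signal)
```

On this toy signal both thresholds land near the noise bounds at ±1, i.e. near the maximum noise amplitude, and the positive and negative thresholds are set independently.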

[W 15] Single-trial synaptic conductance estimation by a novel firing-clamp method Anton Chizhov1 , Michael Druzin2 , Evgeniya Malinina2 , Lyle Graham3 , Staffan Johansson2 1. Ioffe Physical-Technical Institute of RAS, Polytechnicheskaya str. 26, 194021 St.-Petersburg, Russia 2. Section for Physiology, Department of Integrative Medical Biology, Umea University, Umea, Sweden 3. Neurophysiology and New Microscopies Laboratory, INSERM U603-CNRS UMR 8154, Université Paris Descartes, Paris, France doi: 10.12751/nncn.bc2015.0136

A novel method for estimating excitatory and inhibitory synaptic conductances in single-trial patch-clamp recordings is proposed for studies of spontaneous events [1]. The two conductance values are estimated from characteristics of artificially evoked spikes, namely the spike amplitude and the subthreshold potential, which are sensitive to the two input signals. Probe spikes are evoked at a fixed frequency of 200 Hz in the dynamic-clamp mode. The estimation procedure includes stages of calibration, recording and data analysis, implemented in specially developed software. The method has been verified with experimental recordings in isolated neurons under excitatory and inhibitory agonist application. Acknowledgements The work has been supported by the Russian Foundation for Basic Research, grant 15-04-06234a References 1 A. Chizhov, E. Malinina, M. Druzin, L.J. Graham, S. Johansson. Firing clamp: A novel method for single-trial estimation of excitatory and inhibitory synaptic neuronal conductances. Frontiers in Cellular Neuroscience, 8:86, 2014 10.3389/fncel.2014.00086


[W 16] Early cortical spontaneous activity reflects the structure of mature sensory representations Bettina Hein1 , Gordon D Smith2 , David Whitney2 , Klaus Neuschwander1 , David Fitzpatrick2 , Matthias Kaschube1

Although spontaneous patterns of neural activity are thought to play an important role in the development of cortical circuits, relatively little is known about the structure of spontaneous activity in the developing cortex and its relation to mature sensory representations. We sought to determine how early patterns of spontaneous activity are related to stimulus-evoked patterns in the same animal later in development. Here, we took advantage of the columnar architecture of ferret visual cortex to visualize patchy patterns of spontaneous activity prior to the onset of visual experience and the emergence of the orientation preference map. We used the highly sensitive calcium indicator GCaMP6s to reveal population activity on a single-trial basis in chronic recordings of the developing ferret visual cortex. Novel analytical approaches were used to uncover interpretable statistical relations from these data. Prior to eye opening, the correlation structure of spontaneous cortical activity displays robust columnar patterns that resemble the mature organization of the orientation preference map. Although visual stimulation through the closed eyelids evokes strong patterns of activity prior to eye opening, the orientation preference map can only be evoked by visual stimulation after eye opening. We conclude that early spontaneous patterns of cortical activity exhibit an orderly columnar structure that forms the basis for building sensory-evoked representations during cortical development. Acknowledgements Grant/Other Support: BFNT 01GQ0840 (MK) F32EY022001 (GBS) RO1EY011488 (DF)

[W 17] Multiple change point detection in spike trains: Comparison of different parametric CUSUM methods Lena Koepcke1 , Jutta Kretzberg1,2 1. Computational Neuroscience, University Oldenburg, 26129 Oldenburg, Germany 2. Cluster of Excellence "Hearing4all", University Oldenburg, 26129 Oldenburg, Germany doi: 10.12751/nncn.bc2015.0138

It is a crucial task for sensory systems to detect changes in external stimuli based on neuronal responses. In neuroscientific data analysis, the CUSUM (cumulative sum) method is a standard approach to detect a single stimulus change [1]. The basic step of this method consists of recursive calculations of a cumulative sum. When the cumulative sum exceeds a certain threshold, a change point is found [2]. When applying the CUSUM method, assumptions have to be made on: 1) the distribution of the data, 2) the type of shifts in the spike rate. Here, we study under which assumptions an unknown number of multiple stimulus changes can be detected in a spike train. For detection of strong changes in spike rate, which indicate stimulus changes, we compare the four combinations of 1) Poisson or Gaussian distributions and 2) multiplicative or


1. Neuroscience, Frankfurt Institute for Advanced Studies, Ruth-Moufang-Str1, 60438 Frankfurt am Main, Germany 2. Functional Architecture and Development of Cerebral Cortex, Max-Planck-Florida Institute, Jupiter, Fl 33458, USA doi: 10.12751/nncn.bc2015.0137

additive shifts as different CUSUM assumptions. These assumptions were evaluated on multi-electrode recording data from retinal ganglion cells. The retina was stimulated with a dot pattern, which moved with one of five different speeds along one axis in two directions. Stimulus changes caused either increases or decreases in spike rates. When testing the detection performance for both types of changes and all four combinations of CUSUM assumptions, we found: Single vs. multiple stimulus changes: The single stimulus change approach yields slightly better detection performance because of a lower false positive rate. Poisson vs. Gaussian distribution: Comparing the two additive models, the Gaussian model performed slightly better, especially in the false positive rate. Both multiplicative models achieved similar performance. Multiplicative vs. additive shift: Both Poisson models yielded similar results, while for the Gaussian models the additive approach was superior. In summary, it was possible to detect multiple stimulus changes with all four combinations of assumptions. The Gaussian model with an additive shift achieved the best performance. References 1 Ellaway PH, 1978 10.1016/0013-4694(78)90017-2 2 Basseville M, 1988 10.1016/0005-1098(88)90073-8
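One of the four assumption pairs, a Gaussian-approximation CUSUM with an additive shift, can be sketched as below, extended to multiple change points by restarting the recursion after each alarm. The shift size delta, threshold h, and the rule of stepping the baseline by exactly delta after an alarm are simplifying assumptions for the example, not the implementations evaluated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)

def cusum_additive(counts, mu0, delta=5.0, h=15.0):
    """Two-sided CUSUM for additive shifts of size `delta` in the mean.
    The recursion restarts after each alarm, so multiple change points
    in one spike train can be detected."""
    changes, g_up, g_dn, mu = [], 0.0, 0.0, mu0
    for t, x in enumerate(counts):
        g_up = max(0.0, g_up + (x - mu) - delta / 2.0)   # drift up?
        g_dn = max(0.0, g_dn + (mu - x) - delta / 2.0)   # drift down?
        if g_up > h or g_dn > h:
            mu += delta if g_up > h else -delta          # assumed known shift size
            changes.append(t)
            g_up = g_dn = 0.0
    return changes

# binned spike counts: rate steps 5 -> 10 -> 5 (one increase, one decrease)
counts = np.concatenate([rng.poisson(5, 100), rng.poisson(10, 100), rng.poisson(5, 100)])
changes = cusum_additive(counts, mu0=5.0)
```

Both the rate increase and the subsequent decrease are flagged a few bins after they occur, illustrating why a two-sided statistic is needed when stimulus changes can move the rate in either direction.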

[W 18] A statistical characterization of neural population responses in V1

Giacomo Bassetto1,2 , Florian Sandhaeger1,2 , Alexander Ecker1,2,3,4 , Jakob H. Macke1,2,5 1. Max Planck Institute for Biological Cybernetics, Tübingen, Germany 2. Bernstein Center for Computational Neuroscience, Tübingen, Germany 3. Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America 4. Werner Reichardt Center for Integrative Neuroscience and Institute for Theoretical Physics, University of Tübingen, Germany 5. Research Center Caesar, Bonn, Germany doi: 10.12751/nncn.bc2015.0139

Population activity in primary visual cortex exhibits substantial variability that is correlated on multiple time scales and across neurons [1]. A quantitative account of how visual information is encoded in populations of neurons in primary visual cortex therefore requires an accurate characterization of this variability. Our goal is to provide a statistical model that captures the structure of this variability and its dependence on external stimuli, with particular focus on temporal correlations both on short (within-trial) and long (across-trial) time-scales [2]. We address this question using neural population recordings from primary visual cortex in response to drifting gratings [3], using the framework of generalized linear models (GLMs). To model stimulus-driven responses, we take a non-parametric approach and employ Gaussian-process priors to model the smoothness of response profiles across time and different stimulus orientations, and low-rank constraints to facilitate inference from limited data. We find that the parameters which control the prior smoothness are consistent across neurons within each recording session, but differ markedly across recordings. For most neurons, the time-varying response across all stimulus orientations can be well captured using a low-rank decomposition with k = 4 dimensions. To capture slow modulations in firing rates, we include covariates in the GLM which are constrained to vary smoothly across trials, and find that including these terms leads to significant improvements in goodness-of-fit. 

Finally, we use latent dynamical systems [3] with point-process observation models [4] to capture variations and co-variations in firing rates on fast time-scales. While we focus our analysis on modelling neural population responses in V1, our approach provides a general formalism for obtaining an accurate quantitative model of response variability in neural populations. Acknowledgements This study is part of the research program of the Bernstein Center for Computational Neuroscience, Tübingen, funded by the German Federal Ministry of Education and Research (BMBF; FKZ: 01GQ1002). References 1 M.A. Smith and A. Kohn, "Spatial and temporal scales of neuronal correlation in primary visual cortex", The Journal of Neuroscience 28(48), 12591-12603 (2008) 10.1523/JNEUROSCI.2929-08.2008 2 R.L.T. Goris, A. Movshon and E.P. Simoncelli, "Partitioning neuronal variability", Nature Neuroscience 17, 858-865 (2014) 10.1038/nn.3711 3 A.S. Ecker, P. Berens, R.J. Cotton, M. Subramaiyan, G.H. Denfield, C.R. Cadwell, S.M. Smirnakis, M.Bethge, and A.S. Tolias, "State dependence of noise correlations in macaque primary visual cortex", Neuron 82(1), 235-248 (2014) 10.1016/j.neuron.2014.02.006 4 J. Macke, L. Buesing, J.P. Cunningham, B.M. Yu, K.V. Shenoy, and M. Sahani, "Empirical models of spiking in neural populations", in Advances in Neural Information Processing Systems (NIPS) 24, 2011
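The low-rank structure of orientation-by-time responses can be illustrated with a plain SVD on a synthetic response matrix; the Gaussian-process priors, GLM likelihood, and latent dynamics of the actual model are omitted, and the von-Mises-like tuning curve and transient time course are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic trial-averaged response: orientations x time bins, smooth in both
orientations = np.linspace(0, np.pi, 16, endpoint=False)
time = np.linspace(0, 1, 50)
tuning = np.exp(2.0 * np.cos(2 * (orientations - np.pi / 3)))   # von Mises-like
temporal = np.exp(-((time - 0.3) ** 2) / 0.02)                  # transient course
R = np.outer(tuning, temporal) + 0.05 * rng.normal(size=(16, 50))

# low-rank decomposition of the response matrix via SVD
U, s, Vt = np.linalg.svd(R, full_matrices=False)
var_explained = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(var_explained, 0.95) + 1)   # smallest k reaching 95%
R_k = (U[:, :k] * s[:k]) @ Vt[:k]
```

A handful of components suffices to reconstruct the full orientation-by-time response, which is the sense in which a rank-k constraint (k = 4 in the abstract) regularizes inference from limited data.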


[W 19] Assessing pair-wise correlations from neural spike trains: method comparison and application to the data. Veronika Koren1,2 , Timm Lochmann1 , Valentin Dragoi3 , Klaus Obermayer1,2 1. Neural Information Processing, Technische Universitaet Berlin, Sekretariat MAR 5-6, Marchstrasse 23 D-10587 Berlin, Germany 2. Bernstein Center for Computational Neuroscience Berlin, Humboldt-Universitaet zu Berlin, Philippstr. 13 House 6; 10115 Berlin, Germany 3. Department of Neurobiology and Anatomy, University of Texas-Houston Medical School, 6431 Fannin St., Houston , TX 77030 Suite 7.166, USA doi: 10.12751/nncn.bc2015.0140

Task-relevant sensory information is contained in both typical neural responses and their interaction patterns across cortical circuits. In visual cortices of primates, a number of studies report positive but relatively weak pair-wise correlations. Numbers vary from study to study, with some studies reporting very low correlations close to zero [1] and others substantially higher values [2]. Besides differences regarding task and experimental setup, the exact statistical measures used to describe the data contribute to the differences in reported degree of pair-wise interactions. Here, we discuss and compare two common techniques for assessing pairwise correlations from spike trains: correlations from spike counts (rSC) and a measure based on cross-correlograms (rCCG). The two methods are mathematically equivalent under certain conditions [3] but can offer complementary information regarding temporal coordination when applied to empirically observed spike trains. Traditionally, rSC is computed from spike counts at the timescale of a trial and provides a measure of the trial-to-trial co-variability. To capture stimulus-driven temporal coordination on faster timescales, we refine this method and apply it to spike trains binned at finer temporal precision, allowing us to also obtain better estimates of stimulus-independent noise correlations. An alternative measure of correlation can be computed by summing over the central part of the cross-correlogram (ccg). The exact value of this correlation coefficient depends on how the raw ccg is corrected, and we compare estimates based on shuffling across trials and shuffling within trials. We apply both methods to spike trains in V1 and V4 of behaving monkeys and show how

correlations depend on the distance between cells, the stimulus, and correct vs. incorrect responses. We report different correlated response patterns during the stimulus-driven versus quiescent parts of neural responses and discuss their potential impact on behavior.

All data from one session in V4. Up: Autocorrelograms and cross-correlogram for a pair of cells. Correlation function for individual pairs in V4, mean and variance. Down left: rSC for different behavior of the monkey. Down right: comparison of r_sc and r_ccg. References 1 Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, Tolias AS: Decorrelated neuronal firing in cortical microcircuits. Science 2010 327:584-587. 10.1126/science.1179867. 2 Gutnisky DA, Dragoi V. Adaptive coding of visual information in neural populations. Nature. 2008 Mar 13;452(7184):220-4. 10.1038/nature06563. 3 Bair W, Zohary E, Newsome WT. Correlated firing in macaque visual area MT: time scales and relationship to behavior. J Neurosci. 2001 Mar 1;21(5):1676-97
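The two measures can be sketched on toy data: two Poisson "cells" sharing a common input, r_SC computed from trial-wise spike counts, and a CCG-based quantity corrected with a trial-shift predictor. Bin counts, rates, and the single-shift predictor are simplifications, and the within-trial shuffling variant from the abstract is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials, n_bins = 500, 50

# two synthetic "cells" sharing a common Poisson input (source of correlation)
common = rng.poisson(2.0, size=(n_trials, n_bins))
cell1 = common + rng.poisson(1.0, size=(n_trials, n_bins))
cell2 = common + rng.poisson(1.0, size=(n_trials, n_bins))

# r_SC: Pearson correlation of spike counts at the timescale of a whole trial
c1, c2 = cell1.sum(axis=1), cell2.sum(axis=1)
r_sc = float(np.corrcoef(c1, c2)[0, 1])

def ccg(a, b, max_lag=5):
    """Trial-averaged cross-correlogram of binned responses a and b."""
    out = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            out.append(np.mean(a[:, :n_bins - lag] * b[:, lag:]))
        else:
            out.append(np.mean(a[:, -lag:] * b[:, :n_bins + lag]))
    return np.array(out)

# r_CCG-style quantity: area of the cross-correlogram after subtracting a
# shift predictor (the same pairing with trials rotated by one)
raw = ccg(cell1, cell2)
shift_predictor = ccg(cell1, np.roll(cell2, 1, axis=0))
corrected_area = float((raw - shift_predictor).sum())
```

Here the shift predictor removes the stimulus-locked (trial-independent) part, leaving the fast common-input coordination in the central CCG area, while r_SC measures the slower trial-to-trial co-variability.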

[W 20] Skeleton Designs for Vision Systems: A case study in Systems Engineering for Vision Rudra Narayan Hota1 , Visvanathan Ramesh1,2 1. Informatics and mathematics, Johann Wolfgang Goethe Universitat, Frankfurt am Main, Germany 2. Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany doi: 10.12751/nncn.bc2015.0141

Consider the following scenario. An industrial customer comes up with a computer vision problem to be solved, has a vague problem definition, and is able to outline a task requirement in terms of the type of vision function he/she wants to be realized, with multi-faceted performance requirements in terms of cost, performance, and accuracy. The job of the vision systems engineer is to translate this request to specific design options and provide a quick decision on the viability of the solution. This application scenario is addressed by our model-based systems engineering methodology developed in [1], as it provides a way of rapidly synthesizing and evaluating vision system designs. In past work [2,3], we have described how vision can be seen as a two-stage process involving indexing followed by detailed estimation. Feed-forward modules perform the role of indexing, while belief propagation, sampling, or deliberation performs the role of detailed refinement of the hypotheses. Thus, while coming up with skeleton designs, we begin with a contextual model, task, and performance spec and translate it to a choice of quasi-invariants, combinations, and detailed estimation. This design process is illustrated in a case study involving a vision-based detection, tracking and segmentation system for a camouflaged cephalopod in a laboratory fish tank setting. The robustness requirement involves dealing with: a) an object that can deform and change shape

suddenly, has varied acceleration and motion profiles, and can change appearance; and b) the turbulence and lighting changes in the underwater environment. We illustrate how a library of quasi-invariants (with known model-based semantics) developed in [4,5,6] could be used as the starting point for our design, and how knowledge of prior distributions on size and motion enables us to design a feed-forward architecture for detection, segmentation and tracking. Acknowledgements This work was funded by the German Federal Ministry of Education (BMBF), projects 01GQ0840 and 01GQ0841 (BFNT Frankfurt). We are thankful to Prof. Gilles Laurent of MPI for the problem and data.


References 1 V. Ramesh, “Performance Characterization of Image Understanding Algorithms,” Ph.D. Dissertation, Department of Electrical Engineering, University of Washington, 1995 2 Greiffenhagen, M, et al. Design, analysis, and engineering of video monitoring systems: an approach and a case study. Proceedings of the IEEE89.10 (2001) 3 Ramesh V, von der Malsburg C (2013) Systems Engineering for Visual Cognition. 10.12751/nncn.bc2013.0228 4 I. Zoghlami, D. Comaniciu, and V. Ramesh. Illumination invariant change detection, 2005. US Patent App. 11/066,772 5 M. Singh, V. Parameswaran, and V. Ramesh. Order consistent change detection via fast statistical significance testing. In Computer Vision and Pattern Recognition, CVPR 2008 6 X. Gao, V. Ramesh, and I. Zoghlami. Spatial-temporal image analysis in vehicle detection systems, 2008. US Patent App. 11/876,975

[W 21] Distinguishing between pre- and post-incision under general anesthesia by spectral and recurrence analysis of EEG data Mariia Fedotenkova1,2,3 , Axel Hutt1,2,3 , Peter beim Graben1,2,3,4 , James W. Sleigh5 1. Université de Lorraine, Villers-lès-Nancy, F-54600, France 2. UMR nº 7503, CNRS, Loria, Vandœuvre-lès-Nancy, F-54500, France 3. NEUROSYS team, INRIA, Villers-lès-Nancy, F-54600, France 4. Bernstein Center for Computational Neuroscience, Berlin, Germany 5. Department of Anesthesia, Waikato Clinical School of the University of Auckland, Waikato Hospital, Hamilton 3206, New Zealand doi: 10.12751/nncn.bc2015.0142

Nowadays, surgical operations are impossible to imagine without general anaesthesia, which involves loss of consciousness, immobility, amnesia and analgesia. Understanding the mechanisms underlying each of these effects helps guarantee well-controlled medical treatment. Our work focuses on the analgesic effect of general anaesthesia, more specifically on patients' reactions to nociceptive stimuli. The study was conducted on a dataset consisting of 230 EEG signals: pre- and post-incisional recordings for 115 patients, who received desflurane and propofol [1]. Initial analysis was performed by power spectral analysis, a widespread approach in signal processing. The power spectral information was described by fitting the background activity and measuring the power contained in the delta and alpha bands relative to the power of the background activity. It is well known and thoroughly studied that the power spectrum of background activity decays with increasing frequency [2]. Here, the traditional 1/f^α model of the decay was replaced by a Lorentzian model to describe the power spectrum of background activity. Due to the non-stationary nature of EEG signals, spectral analysis alone does not suffice to reveal significant changes between the two states. A further improvement was made by augmenting the spectra with time information. To obtain time-frequency representations of the signals, conventional spectrograms were used as well as a spectrogram reassignment technique [3]. The latter improves the readability of a spectrogram by reassigning the energy contained in it to more precise positions. Subsequently, the obtained spectrograms were used in recurrence analysis

[4] and its quantification by a complexity measure. Recurrence analysis allows one to describe and visualise the dynamics of a system and discover structural patterns contained in the data. The structure of each recurrence plot is characterised by the Lempel–Ziv complexity measure [5], which shows a difference between pre- and post-incision.
References
1 Sleigh, J. W., Leslie, K. & Voss, L. The effect of skin incision on the electroencephalogram during general anesthesia maintained with propofol or desflurane. Journal of Clinical Monitoring and Computing 24, 307–18 (2010) 10.1007/s10877-010-9251-3
2 Bédard, C. & Destexhe, A. Macroscopic Models of Local Field Potentials and the Apparent 1/f Noise in Brain Activity. Biophysical Journal 96, 2589–2603 (2009) 10.1016/j.bpj.2008.12.3951
3 Auger, F. et al. Time-Frequency Reassignment and Synchrosqueezing: An Overview. IEEE Signal Processing Magazine 30, 32–41 (2013) 10.1109/MSP.2013.2265316
4 Marwan, N., Carmen Romano, M., Thiel, M. & Kurths, J. Recurrence plots for the analysis of complex systems. Physics Reports 438, 237–329 (2007) 10.1016/j.physrep.2006.11.001
5 Zhang, X.-S., Roy, R. J. & Jensen, E. W. EEG complexity as a measure of depth of anesthesia for patients. IEEE Transactions on Biomedical Engineering 48, 1424–1433 (2001) 10.1109/10.966601
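The recurrence-quantification step lends itself to a compact sketch. The phrase-counting routine below is a simple LZ78-style variant of Lempel–Ziv complexity, and the recurrence threshold `eps` is arbitrary; neither is necessarily the exact implementation used in the study, which works on spectrogram-based recurrence plots rather than raw signals:

```python
import numpy as np

def lz_phrase_count(seq):
    """LZ78-style phrase count of a symbol string (higher = more complex)."""
    phrases, phrase, count = set(), "", 0
    for ch in seq:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def recurrence_string(x, eps=0.1):
    """Flatten a thresholded recurrence matrix R[i,j] = [|x_i - x_j| < eps] to a binary string."""
    x = np.asarray(x, dtype=float)
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
    return "".join(map(str, R.ravel()))

# A constant signal produces a far more compressible recurrence structure
# than an irregular one, which the phrase count reflects.
flat = lz_phrase_count(recurrence_string([0.0] * 8))
wiggly = lz_phrase_count(recurrence_string([0.0, 0.7, 0.1, 0.9, 0.3, 0.6, 0.2, 0.8]))
```

In practice one would compute such a complexity value per recurrence plot and compare its distribution between pre- and post-incision recordings.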

[W 22] Using Random Forest (RF) as a transfer learning classifier for detecting Error-Related Potential (ErrP) within the context of P300-Speller Anwar Isayed1 , Hashem Tamimi1 1. Department of Information Technology, Palestine Polytechnic University, P.O.Box. 198, Hebron, Palestine doi: 10.12751/nncn.bc2015.0143

The P300-Speller paradigm represents one of the few breakthroughs, and one of the most widely used approaches, for constructing Brain Computer Interface (BCI) systems. It allows subjects to select symbols without requiring neuromuscular control. When the speller outputs a letter different from the intended target, an Error-Related Potential (ErrP) occurs in response. Typically, researchers aim to build a classifier that requires intensive user training: each subject must complete at least one training session, and the result of the classifier depends on the size of the training set and is specific to the individual subject. In this work, we used a fairly large dataset of subjects taken from Perrin and colleagues. In a first step, we applied automated removal of eye-movement and blink artifacts using Blind Source Separation (BSS) to recover ErrP signals from mixtures of ErrP signals and the noise caused by eye movements and blinks. We then applied state-of-the-art machine learning techniques for detecting ErrP in the P300-Speller to construct a transfer learning classifier. Our results indicate that a 5th-order low-pass Butterworth filter below 20 Hz and time samples from a pre-defined window between 200 and 1300 ms after feedback onset are the best parameters for building a classifier across all subjects. We show that it is possible to extract the most useful features from the Anterior Cingulate Cortex by using Principal Component Analysis. We employ a Random Forest (RF) classifier to discriminate the Error-Related Potentials across all subjects using the Principal Component Analysis features, obtaining 78% Area Under the Curve (AUC). To support our work, we compared RF with a linear Support Vector Machine (SVM). Our classifier outperforms the linear SVM because it can cope with the heterogeneity among different subjects.
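The PCA feature-extraction step can be sketched with plain NumPy. The epoch sizes and component count below are illustrative only; in the actual pipeline, epochs would first be low-pass filtered (e.g. with a 5th-order Butterworth filter) and the resulting features fed to a Random Forest classifier:

```python
import numpy as np

def pca_features(epochs, n_components=3):
    """Project (n_trials, n_samples) EEG epochs onto the top principal components.

    Returns an array of shape (n_trials, n_components), ordered by
    decreasing explained variance.
    """
    X = epochs - epochs.mean(axis=0, keepdims=True)   # center each time sample
    U, S, Vt = np.linalg.svd(X, full_matrices=False)  # principal axes are rows of Vt
    return X @ Vt[:n_components].T

rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 200))   # 40 trials, 200 time samples (synthetic)
feats = pca_features(epochs, n_components=5)
```

The `feats` matrix would then serve as input to the classifier, one row per trial.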


References
1 Towards the detection of error-related potentials and its integration in the context of a P300-speller brain computer interface 10.1016/j.neucom.2011.09.013
2 Error-related potential recorded by EEG in the context of a P300 mind speller brain computer interface 10.1109/MLSP.2010.5589217
3 A P300-based word typing brain computer interface system using a smart dictionary and random forest classifier.
4 An introduction to ROC analysis 10.1016/j.patrec.2005.10.010
5 Error-related EEG potentials generated during simulated brain-computer interaction 10.1109/TBME.2007.908083
6 A novel P300-based brain-computer interface stimulus presentation paradigm: moving beyond rows and columns http://dx.doi.org/10.1016/j.clinph.2010.01.030
7 Automatic removal of eye movement and blink artifacts from EEG data using blind component separation 10.1111/j.1469-8986.2003.00141.x
8 Extended ICA removes artifacts from electroencephalographic recordings 10.1.1.32.3352
9 A brain computer interface using electrocorticographic signals in humans 10.1088/1741-2560/1/2/001
10 Effects of crossmodal divided attention on late ERP components. II. Error processing in choice reaction tasks.
11 Moving away from error-related potentials to achieve spelling correction in P300 spellers 10.1109/TNSRE.2014.23744
12 Objective and subjective evaluation of online error correction during P300-based spelling http://dx.doi.org/10.1155/2
13 A comparison study of two P300 speller paradigms for brain computer interface 10.1007/s11571-013-9253-1
14 Detecting and interpreting responses to feedback in BCI 10.3389/fnhum.2012.00114
15 Human error in P300 speller paradigm for brain-computer interface. 10.1109/IEMBS.2007.4352840
16 Online detection of error-related potentials boosts the performance of mental typewriters 10.1186/1471-2202-13-19
17 Machine learning methodologies in P300 speller brain-computer interface systems http://ieeexplore.ieee.org/xpl/arti
18 Online detection of P300 and error potentials in a BCI speller http://dx.doi.org/10.1155/2010/307254
19 Error-related potentials during continuous feedback http://dx.doi.org/10.3389/fnhum.2015.00155

[W 23] Morphological changes of vibration-sensitive interneurons in the honeybee brain indicate improved signal propagation Ajayrama Kumaraswamy1 , Kazuki Kai2 , Philipp Lothar Rautenberg1 , Hiroyuki Ai2 , Hidetoshi Ikeno3 , Thomas Wachtler1 1. Department of Biology II, Ludwig-Maximilians-Universität München, Martinsried, Germany 2. Department of Earth System Science, Fukuoka University, Fukuoka, Japan 3. School of Human Science and Environment, University of Hyogo, Himeji, Japan doi: 10.12751/nncn.bc2015.0144

Honeybees exhibit specific behaviors depending on their age and labor state. Waggle-dance communication is one such age-dependent behavior, which uses air vibration signals. We compared the morphologies of identified vibration-sensitive interneurons in the dorsal

lobe [1] obtained from young bees against those obtained from mature foragers. Using custom software [2], we reconstructed the morphologies of the neurons from LSM image stacks of dye-filled neurons. Neurons were registered to each other to facilitate spatial comparison [3]. We compared different parts of the neurons for differences in dendritic thickness and in dendritic density, as measured by dendritic length per unit volume. We found that, in older bees, (i) the dendritic density of the input region was significantly higher, and (ii) the region between the input and output branches, called the main branch, was significantly thicker than in younger bees. These differences suggest better signal collection and propagation in bees that participate in waggle dance communication than in those that do not. Simulations of multi-compartment models of the reconstructed neurons confirmed that sinusoidal voltage signals in the frequency range 50–500 Hz suffer lower attenuation in neurons from older bees than in those from younger bees. Further, we found a strong negative correlation between the average thickness and the total path length of the main branches of neurons from older bees, whereas no such correlation was found in young bees. This suggests conservation of the surface area of the main branch in older bees. Since the spike initiation zone could be a part of the main branch [4, 5] and active membrane properties depend on surface area, these morphological adaptations could be a way of regulating spiking in older bees.
Acknowledgements Supported by BMBF (grant 01GQ1116) and JST through the funding initiative "German-Japanese Collaborations in Computational Neuroscience"
References
1 Ai, H. (2010). Vibration-processing interneurons in the honeybee brain. Frontiers in Systems Neuroscience 10.3389/neuro.06.019.2009
2 Ikeno, H. (2014) Reproducible segmentation method of neural morphology from LSM images. 11th International Congress of Neuroethology.
3 Kumaraswamy et al. (2014) Method for comparing and classifying morphologies of neurons based on spatial alignment. Proceedings of the Bernstein Conference 2014, 261-62. 10.12751/nncn.bc2014.0286
4 Günay and Prinz (2014) Estimation of spike initiation zone and synaptic input parameters of a Drosophila motoneuron using a morphologically reconstructed model. BMC Neuroscience 10.1186/1471-2202-15-S1-P65
5 Gouwens, N. W., & Wilson, R. I. (2009). Signal propagation in Drosophila central neurons. The Journal of Neuroscience 10.1523/JNEUROSCI.0764-09.2009
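A minimal passive-cable calculation illustrates why a thicker main branch attenuates signals less. All parameter values below are generic textbook numbers, not measurements from these neurons, and the steady-state length constant is only the DC limit of the 50–500 Hz case studied here (frequency-dependent attenuation would be stronger still):

```python
import math

def length_constant_um(diameter_um, R_m=10000.0, R_i=100.0):
    """Steady-state length constant lambda = sqrt((d/4) * (R_m / R_i)).

    R_m: membrane resistivity (ohm * cm^2), R_i: axial resistivity (ohm * cm).
    Diameter is given in micrometers; lambda is returned in micrometers.
    """
    d_cm = diameter_um * 1e-4
    lam_cm = math.sqrt((d_cm / 4.0) * (R_m / R_i))
    return lam_cm * 1e4

def steady_state_attenuation(distance_um, diameter_um):
    """V(x)/V(0) = exp(-x / lambda) for an infinite passive cable."""
    return math.exp(-distance_um / length_constant_um(diameter_um))

# A thicker branch has a larger length constant, so the signal surviving
# after 300 um is closer to 1 (less attenuation).
thin = steady_state_attenuation(300.0, 1.0)   # 1 um diameter
thick = steady_state_attenuation(300.0, 4.0)  # 4 um diameter
```

Since lambda grows with the square root of the diameter, quadrupling the thickness doubles the length constant in this sketch.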

[W 24] Richness in the functional connectome depends on the neuronal integrity of its structural backbone Anton Lord1,2 , Meng Li1,3 , Anna Linda Krausse1,4 , Viola Borchardt1 , Ramona Demenescu1,2 , Johan van den Meer1,3,5 , Shijia Li1 , Hans Jochen Heinze1,2,3,6 , Michael Breakspear7 , Martin Walter1,2,3,4,6 1. Clinical Affective Neuroimaging Laboratory, Department of Behavioral Neurology, Leibniz institute for Neurobiology, Magdeburg, Germany 2. Leibniz institute for Neurobiology, Magdeburg, Germany 3. Department for Neurology, Otto-von-Guericke University, Magdeburg, Germany 4. Department of Psychiatry and Psychotherapy, Otto-von-Guericke University, Magdeburg, Germany 5. Department of Cognition and Emotion, Netherlands Institute for Neuroscience, An Institute of the Royal Academy of Arts and Sciences, Amsterdam, Netherlands 6. Center of Behavioral Brain Sciences, Otto-von-Guericke University, Magdeburg, Germany 7. Systems neuroscience, QIMR Berghofer medical research institute, Brisbane, Australia doi: 10.12751/nncn.bc2015.0145

The human connectome is organised around a backbone of interconnected hubs called the rich club [1]. The posterior cingulate cortex (PCC) has extensive functional importance across multiple networks and so plays an important role as a functional hub of the brain. We sought to elicit the impact of local biochemical and structural factors in the PCC on the rich club in a healthy population. We studied properties of the rich club derived from resting-state fMRI, including the concentrations of N-acetylaspartate (NAA) and creatine (Cr), as well as cortical thickness estimates and whole-brain functional connectivity in the same session. Imaging data were acquired from 48 healthy volunteers (33.08±8.68 years, 35 male) recruited in Magdeburg, Germany. Functional connectivity was derived from resting-state fMRI data, while NAA concentration was calculated as the ratio NAA/Cr using LCModel [2] and cortical thickness was approximated using the CIVET pipeline [3]. Graph analysis was used to identify the extent and influence of the rich club. Our data contain a core rich club of highly connected hubs, with a substantial number of members being subdivisions of the cingulate cortex, including the PCC. A quadratic relationship was observed between NAA/Cr concentration in the PCC and the RCC (p95%), appropriate physical properties enabled fast 3D neurite extension, and long-term gel stability enabled the formation of active 3D neural networks from either primary rat cortical neurons or human induced pluripotent stem cell (hiPSC) derived neurons, which were stable for more than a month. These minimal 3D culture models provide an intermediate between traditional 2D cultures and in vivo work, allowing the study of the role of various ECM cues in 3D neural network formation and function.

Quadratic relationship between the rich club coefficient and NAA/Cr. Linear effects of low and high NAA/Cr values also shown
References
1 Van den Heuvel, M. P., and Sporns, O. 2011. "Rich-Club Organization of the Human Connectome." The Journal of Neuroscience 31 (44) 10.1523/JNEUROSCI.3539-11.2011
2 Provencher, S. W. 2001. "Automatic Quantitation of Localized in Vivo 1H Spectra with LCModel." NMR in Biomedicine 14 (4): 260–64. 10.1002/nbm.698
3 Zijdenbos, A. P., Forghani, R., and Evans, A. C. 2002. "Automatic 'Pipeline' Analysis of 3-D MRI Data for Clinical Trials: Application to Multiple Sclerosis." IEEE Transactions on Medical Imaging 21: 1280–91. 10.1109/TMI.2002.806283

Posters Wednesday

Primary rat cortical neurons cultured in a PEG hydrogel in vitro for 5 days (color coded depth)
Acknowledgements This research has been supported by the 3DNeuroN project in the European Union's Seventh Framework Programme, Future and Emerging Technologies, grant agreement n°296590, the ETHZ, and FIFA/FMARC.
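The rich-club coefficient at the heart of the graph analysis in [W 24] has a standard definition: the edge density of the subgraph spanned by nodes of degree greater than k. A minimal unweighted sketch follows; the study itself worked on functional connectivity graphs, and in practice the coefficient is usually normalized against degree-matched random networks, which is omitted here:

```python
from itertools import combinations

def rich_club_coefficient(edges, k):
    """Density of the subgraph spanned by nodes of degree > k.

    edges: iterable of undirected (u, v) pairs.
    Returns None if fewer than two nodes exceed degree k.
    """
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    club = {n for n, d in degree.items() if d > k}
    if len(club) < 2:
        return None
    edge_set = {frozenset(e) for e in edges}
    within = sum(1 for pair in combinations(club, 2) if frozenset(pair) in edge_set)
    possible = len(club) * (len(club) - 1) // 2
    return within / possible

# Toy graph: a fully connected triangle of hubs (a, b, c),
# each with one peripheral node attached.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("a", "x"), ("b", "y"), ("c", "z")]
phi = rich_club_coefficient(edges, k=1)
```

Here the three hubs are fully interconnected, so the coefficient at k = 1 is 1.0, the hallmark of a rich club.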

Sensory processing

[W 57]

Connectivity of cones and bipolar cells in the mouse retina

Christian Behrens1,2,3 , Timm Schubert2,4 , Thomas Euler2,3,4 , Philipp Berens1,2,3,4 1. Institute for Theoretical Physics, University of Tübingen, Germany 2. Werner Reichardt Centre for Integrative Neuroscience (CIN), University of Tübingen, Germany 3. Bernstein Centre for Computational Neuroscience (BCCN), University of Tübingen, Germany 4. Institute for Ophthalmic Research, University of Tübingen, Germany doi: 10.12751/nncn.bc2015.0178

In the mouse retina, two types of cone photoreceptors – short (S) and medium (M) wavelength-sensitive cones – provide input to twelve types of cone bipolar cell. Type 1 bipolar cells exclusively receive input from M-cones, whereas type 9 bipolar cells selectively contact S-cones (reviewed by Euler et al., 2014). However, which cone types are contacted by the remaining bipolar cell types is largely unknown. We exploit the serial block-face scanning electron microscopy dataset provided by Helmstaedter et al. (2013) to systematically analyse the connections of cones and bipolar cells in the outer plexiform layer of the mouse retina. Using volume segmentation, we reconstruct all cone pedicles and identify S-cones based on their specific contacts with type 9 bipolar cells. We then analyze the existence and characteristics of contacts between the different cone and bipolar cell types, providing a comprehensive connectivity map of the outer plexiform layer.
References
1 Euler et al. (2014) 10.1038/nrn3783
2 Helmstaedter et al. (2013) 10.1038/nature12346
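In its simplest form, a connectivity map of the kind described above reduces to a contact-count matrix over (cone type, bipolar cell type) pairs. The contact list below is invented for illustration; only the type 1/M and type 9/S selectivities mentioned in the abstract are encoded:

```python
from collections import Counter

def contact_matrix(contacts):
    """Count contacts per (cone_type, bipolar_type) pair.

    contacts: iterable of (cone_type, bipolar_type) tuples, e.g. as produced
    by volume segmentation; the Counter acts as a sparse connectivity matrix.
    """
    return Counter(contacts)

# Hypothetical contact list, consistent with the known selectivities:
contacts = [("M", "type1"), ("M", "type1"), ("M", "type2"),
            ("S", "type9"), ("S", "type9"), ("M", "type2")]
cm = contact_matrix(contacts)
```

Rows and columns of such a matrix, aggregated over all reconstructed pedicles, give the comprehensive connectivity map described in the abstract.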


[W 58] On the structure of population activity under fluctuations in attentional state Alexander S Ecker1,2,3,4 , George H Denfield4 , Andreas S Tolias3,4,5 , Matthias Bethge1,2,3 1. Centre for Integrative Neuroscience and Institute for Theoretical Physics, University of Tübingen, Germany 2. Max Planck Institute for Biological Cybernetics, Tübingen, Germany 3. Bernstein Center for Computational Neuroscience, Tübingen, Germany 4. Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA 5. Department of Computational and Applied Mathematics, Rice University, Houston, TX, USA doi: 10.12751/nncn.bc2015.0179

Attention is commonly thought to improve behavioral performance by increasing response gain and suppressing shared variability in neuronal populations. However, both the focus and the strength of attention are likely to vary from one experimental trial to the next, thereby inducing response variability unknown to the experimenter. Here we study analytically how fluctuations in attentional state affect the structure of population responses in a simple model of spatial and feature attention. In our model, attention acts on the neural response exclusively by modulating each neuron's gain. Neurons are conditionally independent given the stimulus and the attentional gain, and correlated activity arises only from trial-to-trial fluctuations of the attentional state, which are unknown to the experimenter. We find that this simple model can readily explain many aspects of neural response modulation under attention, such as increased response gain, reduced individual and shared variability, increased correlations with firing rates, limited-range correlations, and differential correlations. We therefore suggest that attention may act primarily by increasing response gain of individual neurons without affecting their correlation structure. The experimentally observed reduction in correlations may instead result from reduced variability of the attentional gain when a stimulus is attended. Moreover, we show that attentional gain fluctuations – even if unknown to a downstream readout – do not impair the readout accuracy despite inducing limited-range correlations.

[W 59] Neuronal networks underlying visually guided behaviors in zebrafish larvae Stephanie J Preuss1 , Chintan A Trivedi1 , Johann H Bollmann1 1. Biomedical Optics, Max Planck Institute for Medical Research, Jahnstrasse 29, 69120 Heidelberg, Germany doi: 10.12751/nncn.bc2015.0180

Processing sensory information is one of the most important functions of the nervous system and the basis of selecting appropriate behavioral output. The optic tectum (homologous to the mammalian superior colliculus), the major retino-recipient area in teleosts, is known to play a vital role in information processing during visually guided behaviors. Visual stimuli detected and preprocessed by the retina get directly mapped to different layers of the tectal neuropil by retinal ganglion cell (RGC) axons. Tectal neurons pick up these signals and further process stimulus features, like motion direction and the size of an object. This processing may be crucial for extracting salient cues in a dynamic environment and for eliciting suitable motor patterns; for example appetitive or aversive behavior to prey- or predator-like cues, respectively. Here, we use realistic prey- and non-prey-like stimuli to elicit target-directed and aversive turns in immobilized zebrafish
larvae to investigate the role of different tectal cell types in this form of visually guided, goal-directed behavior [1]. Using peripheral motor nerve recordings, the activation strength of the musculature on the left and right side of the larval tail is recorded and compared to measure the potential direction of movement and thereby discriminate between distinct motor patterns. With two-photon targeted patch clamp, we record the tuning of individual tectal cells during these fictive behaviors at high temporal resolution. A comparison between response tuning curves and the morphological profile of individual neurons allows us to investigate structure-function relationships at the single-neuron level for behaviorally relevant visual stimuli. This combined approach of behavioral monitoring, recording of single-cell activity, cell morphology and population functional imaging will allow us to identify neural computations underlying object classification and perceptual decision making during goal-directed behavior.
References
1 Classification of Object Size in Retinotectal Microcircuits 10.1016/j.cub.2014.09.012

[W 60] Synthesis of Distributed Cognitive Systems: An Approach to Learning Multisensory Fusion Cristian Axenie1 , Christoph Richter1 , Mohsen Firouzi1,2 , Jörg Conradt1 1. Neuroscientific Systems Theory Group, Technische Universität München, Karlstr. 45, 80333 Munich, Germany 2. Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Großhaderner Str. 2 D-82152 Planegg-Martinsried, Germany doi: 10.12751/nncn.bc2015.0181

Perception improves with experience, the acquisition of new exploration capabilities, and the development of new perception-action systems, such that internal representations are continuously refined to support more precise motor planning [1]. Both real and artificial systems reliably extract information from their noisy and partially observable environment and build an internal representation of surrounding space. The constructed representation subsequently defines the space of possible actions [2]. In this work we present the synthesis process of a distributed, cortically inspired processing scheme for multisensory fusion. The initial structure is continuously adapted given the available sensory inputs, by learning inter-sensory dependencies, and at the same time converging to a coherent representation of the sensory space and the cross-sensory relations. Using a distributed processing scheme, based on localized intelligence that ensures asynchronous information exchange and adaptation based on external real-world sensory stimuli, the framework provides a fast, robust and scalable computational architecture appropriate for real-time applications. We instantiated the developed framework in various robotic scenarios: heading estimation and path integration for omnidirectional wheeled mobile robots [3] and 3D egomotion estimation for flying quadrotors. By alleviating the need for the tedious design and parametrization usually required in classical, dedicated sensor fusion models [4], the learning capabilities of the proposed framework provide flexible and robust synthesis of a novel processing infrastructure. Deploying a minimal system, able to learn its own constraints given sensory data, enhances the coherency of the sensory space representation. This approach to multisensory fusion makes it attractive for robotic applications which have to cope with increasingly complex operating environments.

Acknowledgements The authors would like to thank Matthew Cook of INI, ETH/University Zurich and Marcello Mullas of NST, TU Munich, for the intense discussions, and the Elite Network of Bavaria for funding the research.
References
1 E. J. Gibson, Principles of Perceptual Learning and Development, ACC Press, pp. 369-394, 1969.
2 R. W. Mitchel, Understanding the body: spatial perception and spatial cognition, in F. L. Dolins, R. W. Mitchell (Eds.), Spatial Cognition, Spatial Perception: Mapping the Self and Space, Cambridge University Press, Cambridge, pp. 341-364, 2010.
3 C. Axenie, J. Conradt, Cortically inspired sensor fusion network for mobile robot egomotion estimation, Robotics and Autonomous Systems, 2014. doi:10.1016/j.robot.2014.11.019
4 S. Thrun, W. Burgard, D. Fox, Probabilistic Robotics, MIT Press, 2005.

[W 61] Causal Bayesian Inference in hierarchical distributed computation in the Cortex, towards a neural model Mohsen Firouzi1,2 , Stefan Glasauer2,3 , Jörg Conradt1,2 1. Neuroscientific System Theory, Department of Electrical and Computer Engineering, Technische Universität München, Karlstr. 45, 80333 Munich, Germany 2. Bernstein Center for Computational Neuroscience, Großhaderner Str. 2, 82152 Planegg-Martinsried, Germany 3. Center for Sensory Motor research, Department of Neurology, Ludwig-Maximilians-University, Marchioninistr. 15, 81377 Munich, Germany doi: 10.12751/nncn.bc2015.0182

To create a coherent spatial map of the world, intelligent agents, including biological systems, need to combine different representations of the environment within a computational strategy. Naive Bayesian Decision Theory is known as an optimal computational strategy to formulate the process of multisensory fusion in natural scenes [1]. When sensory signals are generated by separate sources, the rational strategy is to segregate (not to combine) information through a probabilistic inference process called Causal Inference [2]. Although it is not fully understood where and through which mechanisms Causal Inference emerges in the brain, a recent fMRI study of an audio-visual orientation task showed a distributed interaction among parietal and early sensory cortices [3-4]. We hypothesize that at the top of this hierarchy, a probabilistic decision process controls the flow of information between two alternative computations: fusion and segregation. In cases when auditory and visual stimuli are not largely congruent, the fusion circuit should not integrate information; otherwise, the integration process should be instantiated in the posterior parts of the parietal cortex [4-5]. To evaluate this hypothesis, and given the fact that causality emerges through distributed computation in the brain, we propose a distributed neural model for Causal Inference and show how this model reproduces the behavior of human subjects.
Acknowledgements This work is supported by the German Federal Ministry of Education and Research, Grant 01GQ0440 (BCCN - Munich).
References
1 Seilheimer, R. L., Rosenberg, A., Angelaki, D. E., "Models and processes of multisensory cue combination", Current Opinion in Neurobiology, No. 25, pp. 38–46 (2014) 10.1016/j.conb.2013.11.008
2 Körding, K. P., Beierholm, U., Ma, W. J., Quartz, S., Tenenbaum, J. B., Shams, L., "Causal Inference in Multisensory Perception", PLOS ONE, September 26 (2007) 10.1371/journal.pone.0000943
3 Rohe, T., Noppeney, U., "Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception", PLOS Biology, February 24 (2015) 10.1371/journal.pbio.1002073
4 Kayser, C., Shams, L., "Multisensory Causal Inference in the Brain", PLOS Biology, February 24 (2015) 10.1371/journal.pbio.1002075
5 Firouzi, M., Glasauer, S., Conradt, J., "Flexible Cue Integration by Line Attraction Dynamics and Divisive Normalization", Proc. of 26th International Conference on Artificial Neural Networks, Hamburg, Germany, pp. 691-698 (2014) 10.1007/978-3-319-11179-7_87
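The fusion-versus-segregation decision can be made concrete in the style of Körding et al. [2]: compare the evidence that both cues share a common cause (C = 1) with the evidence for independent causes (C = 2). The sketch below uses the closed-form Gaussian evidence terms from that framework; the noise and prior parameters are arbitrary illustration values, not fitted to any data:

```python
import math

def gauss(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def p_common(x_v, x_a, sig_v=1.0, sig_a=1.0, sig_p=5.0, mu_p=0.0, prior=0.5):
    """Posterior probability of a common cause for visual/auditory cues x_v, x_a."""
    vv, va, vp = sig_v**2, sig_a**2, sig_p**2
    # Evidence under C = 1: both cues generated by one source s ~ N(mu_p, sig_p^2),
    # with the source integrated out analytically.
    z = vv * va + vv * vp + va * vp
    num = (x_v - x_a)**2 * vp + (x_v - mu_p)**2 * va + (x_a - mu_p)**2 * vv
    like_c1 = math.exp(-0.5 * num / z) / (2 * math.pi * math.sqrt(z))
    # Evidence under C = 2: two independent sources, each drawn from the prior.
    like_c2 = gauss(x_v, mu_p, vv + vp) * gauss(x_a, mu_p, va + vp)
    return like_c1 * prior / (like_c1 * prior + like_c2 * (1 - prior))

congruent = p_common(0.5, 0.4)   # nearby cues -> likely one source -> fuse
discrepant = p_common(0.5, 8.0)  # distant cues -> likely two sources -> segregate
```

A downstream circuit gated by this posterior would route congruent cues to a fusion computation and discrepant cues to segregation, which is the hypothesized role of the decision process at the top of the hierarchy.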

[W 62] Light-weight optic flow calculation on miniature computers using a V1-inspired algorithm Vilim Štih1 , Lukas Everding1 , Nicolai Waniek1 , Jörg Conradt1 1. Neuroscientific System Theory, TU München, Karlstraße 45, 5. OG., 80333 München, Germany doi: 10.12751/nncn.bc2015.0183

Natural vision systems are able to easily detect motion in environmental scenes. The speed and efficiency with which this task is solved in nature have so far not been achieved by engineered systems. The main limiting factors are slow frame-based cameras on the one hand and computationally costly data processing on sequential computers on the other. Here, we present a method to circumvent both by utilizing novel, neuromorphic technologies: As a substitute for frame-based cameras, we utilize an embedded dynamic vision sensor (DVS [1, 2]), in which pixels operate independently from each other and trigger an event as soon as they perceive a change in illumination. This asynchronous mode of operation results in a very high temporal resolution for single events (up to 1 µs). This, in turn, leads to a considerably increased accuracy in the estimation of flow vectors compared to estimates using conventional frame-based cameras. At the same time, the data stream of the camera is sparse, because only pixels that undergo an illumination change (e.g. due to moving objects) will send events, while pixels that see static background features remain silent. Secondly, we focus on developing a bio-inspired algorithm for optic flow computation which mimics the recurrent lateral connectivity of the visual cortex V1 in mammals. This efficient algorithm requires only limited computing power, so that data processing can be done on board small mobile robots with severely constrained battery life and processing power. The resulting optic flow map can be used to enable robots to perform tasks like obstacle avoidance or ego-motion estimation autonomously.
Acknowledgements This work was funded by the German Federal Ministry of Education and Research (Grant 01GQ0441, BCCN-Munich Project C3).
References
1 Lichtsteiner, P., Posch, C., Delbruck, T.: A 128x128 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits (2007)
2 http://www.inilabs.com/support/edvs
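One common generic way to estimate optic flow from DVS events, shown here for illustration (it is not necessarily the poster's V1-inspired algorithm), is local plane fitting: events generated by a moving edge trace a plane t(x, y) in space-time, and the gradient of the fitted plane encodes the inverse of the speed:

```python
import numpy as np

def plane_fit_flow(events):
    """Fit t = a*x + b*y + c to (x, y, t) events; return the flow vector.

    The gradient (a, b) of the fitted time surface has magnitude 1/speed
    and points along the motion direction, so velocity = (a, b) / (a^2 + b^2).
    """
    ev = np.asarray(events, dtype=float)
    A = np.column_stack([ev[:, 0], ev[:, 1], np.ones(len(ev))])
    (a, b, _), *_ = np.linalg.lstsq(A, ev[:, 2], rcond=None)
    g2 = a * a + b * b
    return np.array([a, b]) / g2

# Synthetic events from a vertical edge sweeping rightward at 5 px per time unit:
events = [(x, y, x / 5.0) for x in np.arange(10.0) for y in range(3)]
vx, vy = plane_fit_flow(events)
```

Because DVS events are sparse and time-stamped with microsecond precision, such local fits over small space-time neighborhoods can run in real time on the embedded hardware described above.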

[W 63] Encoding of object and background motion trajectories in the salamander retina Norma Krystyna Kühn1,2 , Tim Gollisch1,2 1. Department of Ophthalmology, University Medical Center Göttingen, Waldweg 33, 37073 Göttingen, Germany 2. Bernstein Center for Computational Neuroscience Göttingen, Germany doi: 10.12751/nncn.bc2015.0184

For catching a moving object, it is crucial to differentiate its motion trajectory from the motion of the background. This segregation already starts in the retina. Here, we study the encoding of object and background motion trajectories on the level of the retinal ganglion cells (RGCs), the output cells of the retina. We stimulate isolated salamander retina with moving visual stimuli and record the RGC responses with multielectrode arrays. We found three types of RGCs with specific responses to motion stimuli. First, standard direction-selective (DS) cells strongly respond to a certain direction of drifting gratings, but not when the stimulus is moving in the opposite direction. These cells are known from mouse and rabbit, but have not been characterized in salamander to date. Second, we found object-motion-sensitive (OMS) cells, which preferentially respond to the differential motion of objects on a moving background, but not when the whole scene is moving coherently (Ölveczky et al. Nature 2003). Third, we found an RGC type that combines both functions. These OMS-DS cells respond preferentially to a certain motion direction and prefer moving objects over global background motion. The standard DS and OMS-DS cells differ not only in their preferences for object and background motion, but also in their organization of preferred directions. For standard DS cells the preferred directions occur in three clusters, separated by 120°. These are aligned with the semicircular canals of the vestibular system. The preferred directions of OMS-DS cells are aligned with the cardinal directions. This suggests an interesting analogy to the ON and ON-OFF DS cells in mammals. Further experiments confirm that the standard DS cells are involved in detecting the motion direction of the whole visual scene, while OMS-DS cells detect the direction of moving objects. This information could be used to extract the motion paths of objects and background independently of each other.

Acknowledgements This work was supported by the Dorothea Schlözer Programme of Göttingen University, the DFG (GO 1408/2-1 and SFB 889, C1) and by the European Commission (FP7-ICT-2011.9.11/600954, “VISUALISE”). References 1 Ölveczky et al. Nature 2003 10.1038/nature01652

[W 64] Detecting the Layout of Nonlinear Subunits in Receptive Fields of Retinal Ganglion Cells Jian K. Liu1,2 , Helene-Marianne Schreyer1,2 , Arno Onken3 , Mohammad Hossein Khani1,2 , Michael Weick1,2 , Stefano Panzeri3 , Tim Gollisch1,2 1. Department of Ophthalmology, University Medical Center Goettingen, 37073 Goettingen, Germany 2. Bernstein Center for Computational Neuroscience Goettingen, 37073 Goettingen, Germany 3. Laboratory of Neural Computation, Istituto Italiano di Tecnologia Rovereto, 38068 Rovereto, Italy doi: 10.12751/nncn.bc2015.0185

Retinal ganglion cells (RGCs) encode information about visual inputs that impinge on the eye in their spiking activity. To do so, the cells collect information across their receptive fields from multiple presynaptic neurons. Typically, individual RGCs receive many excitatory inputs from upstream bipolar cells, which are thought to constitute nonlinear subunits within the receptive field. To date, a systematic method for detecting these subunits without detailed measurements of the anatomical and physiological properties of the bipolar-ganglion cell circuitry is still lacking. Here, we explore a new methodology extending standard reverse-correlation techniques with semi-non-negative matrix factorization (semi-NMF). We start by applying spatio-temporal white-noise stimulation to a spiking RGC model with pre-defined subunits. We collect the spatial stimulus patterns that elicited spikes and apply the semi-NMF method to this spike-triggered stimulus ensemble. This approach allows us to recover the pre-defined subunits. The method is then applied to recorded spike responses of RGCs from isolated salamander and mouse retina under spatio-temporal white-noise stimulation. Here, we find that multiple subunits inside the ganglion cell receptive fields are revealed with clearly defined, localized spatial structure. This allows us to describe the layout of subunits across large populations of simultaneously recorded ganglion cells. Finally, by comparison with a standard linear-nonlinear model, based on the RGCs’ receptive fields, we show that taking the identified subunits into account leads to better predictions of the cells’ spiking responses. Acknowledgements This work was supported by the DFG (GO 1408/2-1 and SFB 889, C1) and by the European Commission (FP7-ICT-2011.9.11/600954, “VISUALISE”).
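As a toy illustration of the factorization step, the following sketch implements plain semi-NMF with the multiplicative updates of Ding, Li & Jordan (2010) and applies it to a synthetic spike-triggered matrix; the dimensions and synthetic data are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, seed=0):
    """Semi-NMF, X ≈ F @ G.T with G >= 0 and F unconstrained, using the
    multiplicative updates of Ding, Li & Jordan (2010). Rows of X would be
    spike-triggered stimuli; columns of G are candidate spatial subunits."""
    pos = lambda A: (np.abs(A) + A) / 2.0
    neg = lambda A: (np.abs(A) - A) / 2.0
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[1], k)) + 0.1
    for _ in range(n_iter):
        F = X @ G @ np.linalg.pinv(G.T @ G)        # unconstrained update of F
        XtF, FtF = X.T @ F, F.T @ F
        # multiplicative update keeps G non-negative
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                     (neg(XtF) + G @ pos(FtF) + 1e-12))
    return F, G
```

On synthetic data built from two non-negative "subunits", the factorization recovers a low reconstruction error while keeping G non-negative.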

[W 65] Stochastic optimization as a tool for studying spatial integration in mouse retinal ganglion cells Luis Giordano Ramos Traslosheros Lopez1,2,3 , Michael Weick2,3 , Tim Gollisch2,3 1. International Max Planck Research School for Neurosciences, Grisebachstr. 5 D-37077 Göttingen, Germany 2. Department of Ophthalmology, DFG-SFB 889, University Medical Center Göttingen, Waldweg 33 D-37073 Göttingen, Germany 3. Bernstein Center for Computational Neuroscience Göttingen, Am Faßberg 17 D-37077 Göttingen, Germany doi: 10.12751/nncn.bc2015.0186

How a neuron integrates the incoming streams of information largely constrains its computational role. In the retina, ganglion cells carry out the final integration process over their receptive fields. Here we study how different ganglion cells in the mouse retina integrate visual contrast over space. We do so by performing cell-attached recordings of individual ganglion cells in the isolated mouse retina and by using closed-loop control of visual stimulation in the receptive field center to identify sets of stimuli that evoke the same neuronal response (“iso-response stimuli”). With this method of iso-response measurements, we can assess nonlinearities of stimulus integration that precede cell-intrinsic nonlinearities, such as those resulting from the spike generation process. We aim at improving the method through simulation-based optimization. We test the performance of stochastic search algorithms under realistic scenarios, such as different types of nonlinearities that had previously been observed in similar experiments [1, 2]. The optimized design thus deals efficiently with the temporal constraints intrinsic to cell-attached recordings, reducing the number of stimulus presentations required to complete the search for iso-response stimuli. Acknowledgements Funding comes from the Research Program, Faculty of Medicine, University of Göttingen, the DFG (GO 1408/2-1 and SFB 889, C1) and the European Commission (FP7-ICT-2011.9.11, no 600954, “VISUALISE”). References 1 Bölinger D, Gollisch T (2012) Closed-loop measurements of iso-response stimuli reveal dynamic nonlinear stimulus integration in the retina. Neuron 73:333–346. 10.1016/j.neuron.2011.10.039 2 Takeshita D, Gollisch T (2014) Nonlinear spatial integration in the receptive field surround of retinal ganglion cells. J Neurosci 34:7548–7561. 10.1523/JNEUROSCI.0413-14.2014
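The closed-loop search for iso-response stimuli can be illustrated with a toy model. The two-subunit response function, the target response and the bisection-along-rays strategy below are simplifying assumptions for illustration; the actual experiments use optimized stochastic search on recorded cells.

```python
import numpy as np

def subunit_model(x1, x2):
    """Toy two-subunit ganglion-cell model: the contrast in each
    receptive-field half is half-wave rectified and squared before
    summation (an assumption for illustration, not the recorded cells)."""
    return max(x1, 0.0) ** 2 + max(x2, 0.0) ** 2

def iso_response_point(resp, target, angle, r_max=10.0, tol=1e-5):
    """Bisect along a ray of contrast pairs from the origin until the model
    response matches `target` -- one closed-loop probe of the iso-response
    contour."""
    u = np.array([np.cos(angle), np.sin(angle)])
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if resp(*(mid * u)) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * u
```

For this quadratic toy model the iso-response contour in the first quadrant is a circular arc, which the ray search recovers point by point.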


[W 66] Classifying retinal ganglion cells in the salamander retina Fernando Rozenblit1,2 , Tim Gollisch1,2 1. Department of Ophthalmology, University Medical Center Göttingen, Waldweg 33, 37073 Göttingen, Germany 2. Bernstein Center for Computational Neuroscience Göttingen, Germany doi: 10.12751/nncn.bc2015.0187

The retina is a complex neural network, which breaks down the visual scene into its distinctive features such as local contrast, motion, and color [1]. Retinal ganglion cells (RGCs) form the output layer of this network, and a typical vertebrate retina shows more than 15 RGC types. They can be identified based on anatomical and physiological properties, and each type is expected to relay information about distinct visual features to specific areas in the brain. Separating these channels of visual information is crucial for understanding how the visual scene is encoded, and much effort is put into classifying RGCs. For the salamander, attempts based on the temporal filtering properties of RGCs were successful in separating coarse groups of RGC types. But surprisingly, only one type showed tiling (a mosaic arrangement) of its receptive fields [2,3]. Because tiling is considered a strong signature of single RGC types [4], we here ask whether a refined classification might yield tiling by further RGC types. We recorded the spiking activity from isolated axolotl retinas with a 252-electrode array and sorted the spikes offline. In a typical experiment, we simultaneously recorded from more than 200 RGCs in a single retina. The retina was presented with spatiotemporal white noise, and the receptive fields of the RGCs were estimated via reverse correlation. We classified the RGCs in a single retina by a spectral clustering algorithm [5], based on the similarity between their receptive field sizes, temporal filtering properties and the autocorrelation of the spike trains. We consistently found three OFF cell types that independently tile the retina, extending the findings of previous reports. Two of those types shared similar temporal dynamics, but differed in their receptive field sizes and autocorrelations. Our results suggest that tiling is a fundamental feature of ganglion cell types also in the salamander retina.

Acknowledgements This work was supported by the DFG (GO 1408/2-1 and SFB 889, C1), and by the European Commission (FP7-ICT-2011.9.11/600954, “VISUALISE”). References 1 Masland, R. H. (2001). The fundamental plan of the retina. Nature Neuroscience, 4(9), 877–86 10.1038/nn0901-877 2 Segev, R., Puchalla, J., & Berry, M. J. (2006). Functional organization of ganglion cells in the salamander retina. Journal of Neurophysiology, 95(4), 2277–2292 10.1152/jn.00928.2005 3 Marre, O., Amodei, D., Deshmukh, N., Sadeghi, K., Soo, F., Holy, T. E., & Berry, M. J. (2012). Mapping a Complete Neural Population in the Retina. The Journal of Neuroscience, 32(43), 14859–14873 10.1523/JNEUROSCI.0723-12.2012 4 DeVries, S. H., & Baylor, D. A. (1997). Mosaic arrangement of ganglion cell receptive fields in rabbit retina. Journal of Neurophysiology, 78(4), 2048–2060 5 Ng, A. Y., Jordan, M. I., & Weiss, Y. (2002). On Spectral Clustering: Analysis and an Algorithm. In T. G. Dietterich, S. Becker, & Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems 14 (pp. 849–856). MIT Press.
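A minimal version of such a clustering step — a spectral embedding in the spirit of Ng, Jordan & Weiss, followed by k-means — might look as follows. The three-dimensional feature vectors and all parameters are illustrative assumptions, not the features or settings of the study.

```python
import numpy as np

def spectral_cluster(X, k, sigma=1.0):
    """Spectral clustering (after Ng, Jordan & Weiss, 2002) of cells
    described by feature vectors X (n_cells, n_features), e.g. receptive
    field size, temporal filtering and autocorrelation measures."""
    # Gaussian affinity from pairwise feature distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    # normalized affinity D^{-1/2} A D^{-1/2}
    dinv = 1.0 / np.sqrt(A.sum(1))
    L = dinv[:, None] * A * dinv[None, :]
    # embed each cell by the top-k eigenvectors, renormalized to unit length
    _, v = np.linalg.eigh(L)
    Y = v[:, -k:]
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    # k-means with farthest-point initialization
    centers = [Y[0]]
    for _ in range(k - 1):
        dist = np.min([((Y - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(Y[dist.argmax()])
    centers = np.array(centers)
    for _ in range(50):
        labels = ((Y[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([Y[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels
```

On well-separated synthetic feature clusters, the procedure recovers the group structure exactly.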

[W 67] Postnatal development of spontaneous activity in rat somatosensory cortex Thomas Fucke1,2,3 , Joachim Hass2,3 , Christian Schmitz2 , Sven Berberich2 , Thomas Hahn1,2,3 1. Max-Planck-Institute for Medical Research, Jahnstr. 29, 69120 Heidelberg, Germany 2. Central Institute of Mental Health, Medical Faculty Mannheim / Heidelberg University, J5, 68159 Mannheim, Germany 3. Bernstein Center for Computational Neuroscience, Heidelberg/Mannheim, J5, 68159 Mannheim, Germany doi: 10.12751/nncn.bc2015.0188

During postnatal development, organisms have to learn new tasks and accordingly restructure their neuronal circuits. This restructuring is seen in neuronal activity not only during task performance, but also in spontaneous activity, possibly as a signature of the formation of internal models. Here, we performed whole-cell patch-clamp experiments in vivo and measured spontaneous membrane potential fluctuations in layer 2/3 pyramidal cells in the somatosensory cortex of anesthetized rats at different early postnatal ages, from postnatal day 5 (P5) up to P30. Up/down-states were present in cells at all ages. However, before P12, up-states were relatively sparse and occurred only irregularly, with small amplitudes of around 5-8 mV. After P15, the membrane potential showed more regular oscillations in the 1 Hz regime, with amplitudes around 15 mV. This might be due to a non-linear change in the converging connectivity onto these cells. To test this hypothesis, we are utilizing two data-driven age-dependent models (a detailed biophysical model and an adaptive exponential integrate-and-fire model) to estimate the synaptic activation patterns that best explain the observed spontaneous activity patterns. Acknowledgements This work was funded by grants from the German ministry for education and research (BMBF, 01GQ1003B) and the Max-Planck-Society.
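For readers unfamiliar with the second model class mentioned above, a minimal adaptive exponential integrate-and-fire (AdEx) simulation with the standard parameter set of Brette & Gerstner (2005) can be sketched as follows; the step current and duration are arbitrary illustrative choices, not values fitted to the recorded data.

```python
import numpy as np

def adex_sim(I_ext=0.8e-9, T=0.5, dt=5e-5):
    """Adaptive exponential integrate-and-fire neuron, forward-Euler
    integration, standard parameters of Brette & Gerstner (2005).
    Returns spike times in seconds."""
    C, gL, EL = 281e-12, 30e-9, -70.6e-3      # capacitance and leak
    VT, dT = -50.4e-3, 2e-3                   # threshold and slope factor
    a, tau_w, b = 4e-9, 144e-3, 80.5e-12      # adaptation parameters
    V_reset, V_peak = -70.6e-3, 0.0
    V, w, spikes = EL, 0.0, []
    for step in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * dT * np.exp((V - VT) / dT)
              - w + I_ext) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:                       # spike: reset and adapt
            spikes.append(step * dt)
            V = V_reset
            w += b
    return np.array(spikes)
```

Under a constant suprathreshold current, the adaptation current w lengthens successive inter-spike intervals (spike-frequency adaptation).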

[W 68]

Dynamic binaural synthesis in cochlear-implant research

Florian Völk1,2 , Werner Hemmert1 1. Bio-Inspired Information Processing, IMETUM, Technische Universität München, Boltzmannstraße 11, 85748 Garching, Germany 2. WindAcoustics UG (haftungsbeschränkt), Mühlbachstraße 1, 86949 Windach, Germany doi: 10.12751/nncn.bc2015.0189

The ability of normal-hearing subjects to differentiate directional information extracted from vocoded stimuli may be considered an indication of the discriminative directional information potentially available in cochlear-implant (CI) listening. In a pilot study with dynamic binaural synthesis, we addressed the angle formed with respect to the listener’s head by two sound sources perceptually just differentiable in position when sounded in succession (minimum-audible angle, MAA). MAAs were measured with two different sets of analysis-channel configurations, with 6 and 8 normal-hearing subjects (22 to 31 years), respectively. The synthesis-output signals were passed through symmetric but interaurally uncorrelated vocoder systems before being presented to the listeners. Analysis of variance (ANOVA) indicates a significant main effect of the channel configuration for both sets [F(5,45)=2.49; p=0.0447] and [F(7,63)=2.51; p=0.0243]. Informal reports and empirical listening point towards a tendency for up to three hearing sensations to arise simultaneously during the experiment, one at the intended position and two at the listeners’ ears, presumably due to unrealistically low interaural correlation. The MAAs in general appear plausible compared to the situation without vocoder and to results reported for CI listening (between 3° and 8°). However, in none of the vocoded conditions were MAAs comparable to those of the normal-hearing situation. The results indicate no clear dependency of the MAA on the number of analysis channels, although a tendency towards slightly increasing angles for more than six channels is visible. In summary, the described pilot study shows the applicability and potential benefit of dynamic binaural synthesis for research in audiology: by providing more realistic stimuli than most conventional procedures, otherwise hidden phenomena can be revealed and studied under controlled but realistic acoustic conditions.

[W 69] Discrimination of visual textures in recurrent network models of visual cortex Hanna Kamyshanska1 , Dmitry Bibichkov2 , Matthias Kaschube3 1. Frankfurt Institute for Advanced Studies and Johann Wolfgang Goethe University, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany 2. Frankfurt Institute for Advanced Studies and Johann Wolfgang Goethe University, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany 3. Frankfurt Institute for Advanced Studies and Johann Wolfgang Goethe University, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany doi: 10.12751/nncn.bc2015.0190

The functional architecture of the visual cortex displays marked differences across mammalian species: in stark contrast to primates, in which the preferred stimulus orientation forms an almost smooth map across the cortical surface, in rodents a ‘salt-and-pepper’ organization has been observed [1]. It is conceivable that the layout of preferred orientations can affect the processing of visual input. Previously [2], by analyzing a biologically inspired object recognition system with a feedforward network architecture, we found that a smooth map outperforms a salt-and-pepper organization in certain texture recognition tasks. In this work, we explore the impact of recurrent connections on the discrimination of visual textures. Our model is a single-layer rate network. Feedforward inputs reflect a predefined spatial layout of orientation preferences. Inspired by [3], recurrent connections between two neurons depend both on their spatial distance and on their difference in preferred orientation. We design a network for the salt-and-pepper organization in an analogous way. To explore the influence of recurrent connections on orientation selectivity, we vary the strength of the lateral connections and the selectivity of the feedforward connections. Driving the network with oriented gratings, we observe a sharpening of the orientation tuning with increasing strength of recurrent connections. We also compare both architectures in terms of their performance in orientation discrimination. To this end we train a linear classifier on the network responses to discriminate between two gratings of different angles and compare the classification performance as a function of angle difference. We expect that our study will shed light on the role of recurrent connections in texture and orientation discrimination. Acknowledgements This work was supported by BMBF, project 01GQ0840 (BFNT Frankfurt). References 1 Kaschube M. Neural maps versus salt-and-pepper organization in visual cortex. Curr Opin Neurobiol 2014, 24:95–102. 2 Bauer F., Kaschube M. Processing textures in a smooth visual map and a salt-and-pepper organization. Bernstein Conference 2013. 3 Blumenfeld B., Bibitchkov D., & Tsodyks M. Neural network model of the primary visual cortex: From functional architecture to lateral connectivity and back. J Comput Neurosci. 2006, 20(2), 219-241.
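The sharpening effect of recurrence can be reproduced in a minimal ring-model sketch: a rate network whose recurrent weights depend only on the difference in preferred orientation (narrow excitation, broader inhibition). All parameters here are illustrative assumptions, not those of the study, which additionally includes spatial distance and a salt-and-pepper variant.

```python
import numpy as np

N = 180
theta = np.linspace(0.0, np.pi, N, endpoint=False)   # preferred orientations

def ring_weights(s_exc=1.0, s_inh=0.7, sig_e=0.2, sig_i=0.6):
    """Recurrent weights as a function of preferred-orientation difference:
    narrow excitation minus broader inhibition, each row-normalized."""
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, np.pi - d)                     # circular difference
    k_e = np.exp(-(d / sig_e) ** 2)
    k_i = np.exp(-(d / sig_i) ** 2)
    return (s_exc * k_e / k_e.sum(1, keepdims=True)
            - s_inh * k_i / k_i.sum(1, keepdims=True))

def response(W, stim=np.pi / 2, dt=0.1, n_steps=500):
    """Rate dynamics dr/dt = -r + [h + W r]_+ with weakly tuned input."""
    h = 1.0 + 0.2 * np.cos(2.0 * (theta - stim))
    r = np.zeros(N)
    for _ in range(n_steps):
        r += dt * (-r + np.maximum(h + W @ r, 0.0))
    return r

def osi(r):
    """Orientation selectivity index (vector strength of the tuning curve)."""
    return np.abs(np.sum(r * np.exp(2j * theta))) / np.sum(r)
```

Comparing the population response with and without recurrence shows the recurrent network amplifying the tuned component of its input relative to the untuned one.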

[W 70] Modeling Monaural Coincidence Detection in the Lateral Superior Olive Go Ashida1 , Daniel J Tollin2 , Jutta Kretzberg1 1. Department for Neuroscience, University of Oldenburg, 26129 Oldenburg, Germany 2. Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80010, USA doi: 10.12751/nncn.bc2015.0191

Coincidence detection is one of the most fundamental operations for neuronal information processing. Many auditory neurons encode or decode temporal information of sounds by detecting coincident arrivals of synaptic inputs. Neurons in the mammalian lateral superior olive (LSO) detect sound intensity differences between the two ears by comparing excitatory and inhibitory inputs driven respectively by ipsilateral and contralateral sounds. Binaural coding in the LSO has been extensively studied both theoretically and experimentally, but, in contrast, monaural response properties of LSO neurons are only poorly understood. Previous in vivo recordings using amplitude-modulated (AM) sounds [1] showed that spike rates of LSO neurons generally decrease with increasing modulation frequency, but that the variation across neurons is considerable. To reveal the mechanisms underlying the observed frequency dependence and variability, we used a simple coincidence counting model with only a small number of parameters [2] and investigated how each biophysical factor might affect AM coding in the LSO. Our simulations showed: 1) Frequency dependence of the input parameters (spike rates and degrees of phase-locking) had only limited effects on LSO output spike rates; 2) Increasing the coincidence threshold or decreasing the length of the coincidence window reduced output spike rates; 3) Changing the coincidence threshold shifted the half-peak positions of AM-tuning curves; and 4) The duration of the refractory period affected only the low-frequency part of the AM-tuning curve. The observed variations in modulation-frequency dependence across neurons may thus reflect variations of the coincidence detection parameters we examined, suggesting that the considerable inhomogeneity of LSO neurons might be essential for sound intensity coding. The minimalistic model used in this study would also be useful for studying neuronal coincidence detection more generally in other systems.

Acknowledgements Supported by the Cluster of Excellence "Hearing4all" (GA, JK), by the NIH Grant DC011555 (DJT), and by a Hanse-Wissenschaftskolleg (HWK) Fellowship (DJT). References 1 Joris PX, Yin TCT (1998) Envelope coding in the lateral superior olive. III. Comparison with afferent pathways. J Neurophysiol 79:253–269. 2 Franken TP, Bremen P, Joris PX (2014) Coincidence detection in the medial superior olive: mechanistic implications of an analysis of input spiking patterns. Front Neural Circuits 8:42. 10.3389/fncir.2014.00042
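A minimal coincidence counting model of this kind can be sketched as follows; the pooled Poisson input trains, window, threshold and refractory period are illustrative parameters, not the AM-driven inputs or fitted values of the study.

```python
import numpy as np

def coincidence_counter(input_trains, window=0.8e-3, threshold=3,
                        refractory=1.6e-3):
    """Minimal coincidence-counting neuron: pool all input spikes, fire an
    output spike whenever at least `threshold` of them fall inside the
    preceding coincidence `window`, then enforce an absolute refractory
    period."""
    t = np.sort(np.concatenate(input_trains))
    # number of pooled spikes in (t_i - window, t_i], via a sliding index
    starts = np.searchsorted(t, t - window, side='right')
    counts = np.arange(1, len(t) + 1) - starts
    out, last = [], -np.inf
    for ti, n in zip(t, counts):
        if n >= threshold and ti - last >= refractory:
            out.append(ti)
            last = ti
    return np.array(out)
```

Consistent with finding 2) above, raising the coincidence threshold lowers the output spike rate for the same inputs.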


[W 71] Neural representation of distinct sensory modalities in the mouse somatosensory cortex Sanjeev Kumar Kaushalya1 , Claudio Sebastian Quiroga-Lombard2 , Daniel Durstewitz2,3 , Rohini Kuner1 1. Institute of Pharmacology, Heidelberg University, Heidelberg, Germany 2. Department of Theoretical Neuroscience, Bernstein Center for Computational Neuroscience, Central Institute of Mental Health, Mannheim, Germany 3. School of Computing and Mathematics, Faculty of Science and Environment, Plymouth University, Plymouth, UK doi: 10.12751/nncn.bc2015.0192

Techniques for recording simultaneously from large neural populations have opened up a completely new potential for understanding details of neural representations and dynamics. In particular, in vivo calcium imaging techniques are among those that enable tracking and examining the activity of large neuronal populations with single-cell selectivity. Although considerable progress has been made in the study of sensory representations, within the somatosensory domain it is still an open question how diverse modalities such as heat, cold and mechanical pressure are encoded and differentiated at the ensemble level. Here we used mice transduced with a genetically encoded calcium indicator (GCaMP6s) and performed in vivo multi-photon recordings in the hind-limb S1 cortex (SSHL) to assess neuronal population responses to cold, heat and mechanical pressure. Our results reveal a very sparse neuronal representation of these modalities, with less than 10% of the neurons responding significantly to a stimulus. We also observed a high degree of mixed selectivity, with single neurons responding to different modalities, in particular to cold and heat. These analyses generally speak for a population code even at this primary sensory stage, but still suggest that temperature and mechanical modalities also have a somewhat segregated representation in S1 cortex. References 1 Barretto, R. P., Gillis-Smith, S., Chandrashekar, J., Yarmolinsky, D. A., Schnitzer, M. J., Ryba, N. J., & Zuker, C. S. (2015). The neural representation of taste quality at the periphery. Nature, 517(7534), 373-376. 2 Omori S, Isose S, Otsuru N, Nishihara M, Kuwabara S, Inui K, Kakigi R. Somatotopic representation of pain in the primary somatosensory cortex (S1) in humans. Clin Neurophysiol. 2013 Jul;124(7):1422-30.

[W 72] Visual tuning properties of tectal neurons in zebrafish Katharina Bergmann1 , Paola Meza Santoscoy1 , Vincent T Cunliffe1 , Anton Nikolaev1 1. Department of Biomedical Science, University of Sheffield, Firth Court, Western Bank, Sheffield, S10 2TN, United Kingdom doi: 10.12751/nncn.bc2015.0193

A central question in visual neuroscience is to understand the circuits underlying the shape tuning properties of neurons. Zebrafish, which are small, transparent, and easily genetically modified, offer an ideal model to study these circuits. Previous research on zebrafish has revealed that retino-tectal circuits display basic tuning properties, such as direction-, orientation- and size-selectivity [1,2]. Tectal tuning to more complex visual features such as angularity and curvature, however, has not been tested yet. To achieve this, we are using larval zebrafish (15 dpf) which pan-neuronally express GCaMP3. The fish are placed in a custom-made chamber, onto which a set of visual stimuli is projected (angles, curvatures and geometrical shapes). Simultaneously, neuronal activity in the optic tectum is monitored using confocal imaging. Preliminary results showed that moving images of angles and shapes reliably evoked activity in the superficial, medium and deeper layers of the optic tectum, in both cell bodies and the neuropil. Moreover, certain areas of the optic tectum responded differently to different stimuli. These findings suggest that visual neurons in the optic tectum display a complex tuning behaviour, beyond direction- and orientation-selectivity. Further analysis will provide information regarding the spatial organisation of selective activity and the difference in tuning between cell bodies and the neuropil. Acknowledgements This research is funded by the University of Sheffield and supported by The Royal Society. References 1 Nikolas Nikolaou, Andrew S. Lowe, Alison S. Walker, Fatima Abbas, Paul R. Hunter, Ian D. Thompson, Martin P. Meyer, Parametric Functional Maps of Visual Inputs to the Tectum, Neuron, Volume 76, Issue 2, 18 October 2012, Pages 317-324, ISSN 0896-6273 doi:10.1016/j.neuron.2012.08.040 2 Stephanie J. Preuss, Chintan A. Trivedi, Colette M. vom Berg-Maurer, Soojin Ryu, Johann H. Bollmann, Classification of Object Size in Retinotectal Microcircuits, Current Biology, Volume 24, Issue 20, 20 October 2014, Pages 2376-2385, ISSN 0960-9822 doi:10.1016/j.cub.2014.09.012

[W 73] The origin of broad whisker touch receptive fields in a major output cell type of cortex Robert Egger1,2 , Christiaan P.J. de Kock3 , Rajeev T. Narayanan1 , Marcel Oberlaender1,4,5 1. Computational Neuroanatomy, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany 2. Graduate School of Neural Information Processing, University of Tuebingen, Tuebingen, Germany 3. Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, Netherlands 4. Digital Neuroanatomy, Max Planck Florida Institute for Neuroscience, Jupiter, FL, USA 5. Bernstein Center for Computational Neuroscience, Tuebingen, Germany doi: 10.12751/nncn.bc2015.0194

A fundamental challenge in neuroscience is to understand the cellular and circuit mechanisms underlying the receptive fields of neurons in sensory cortices. Despite steady progress in the analysis of cortical circuits, current models cannot explain the origin of broad receptive fields in L5 thick-tufted pyramidal neurons (L5tt), a major cortical output cell type. We therefore developed a reverse-engineering approach to create anatomically and functionally realistic models of neurons in rat vibrissal cortex (vS1). Based on in vivo receptive field measurements, 3D reconstruction of neuron morphologies, and integration of neurons into an average model of the circuitry of vS1, we constrained simulations to reveal the source of the broad receptive fields of L5tt. First, recorded and reconstructed neurons were registered to their location in an average model of vS1, and we determined the number and subcellular distribution of thalamocortical and intracortical synaptic inputs to each neuron. Next, neurons were turned into biophysically detailed compartmental models. Finally, we activated presynaptic neurons based on spike probabilities measured in vivo. Without optimization of the experimentally constrained parameters, the simulated spiking responses of these models to biologically realistic spatiotemporal synaptic input patterns after touch of the principal whisker (PW) or different surround whiskers (SuW) matched in vivo measurements. We found that the response of L5tt is composed of two phases. The first phase is driven by input from thalamus and L6 (PW deflection) or solely by L6 (SuW deflection), while the second phase reflects recurrent intracortical activity. This new model of the spread of sensory-evoked excitation in cortex explains previously contradictory observations and suggests cell type-specific computations in cortical circuits. For example, L6 neurons may act as differential input detectors, while L5tt integrate sensory input across time and space.

Acknowledgements Funding: Max Planck Florida Institute for Neuroscience, Studienstiftung des deutschen Volkes, BMBF/FKZ 01GQ1002, Max Planck Institute for Biological Cybernetics, VU University, Amsterdam

[W 74] Reward prediction errors refine sensory representations in a neural network model Raphael Holca-Lamarre1,2 , Jörg Lücke1,3 , Klaus Obermayer1,2 1. Fakultät IV, Technische Universität Berlin, Berlin, Germany 2. Bernstein Center for Computational Neuroscience, Berlin, Germany 3. Dept. of Medical Physics and Acoustics, Universität Oldenburg, Oldenburg, Germany doi: 10.12751/nncn.bc2015.0195

The ventral tegmental area (VTA) contains dopamine releasing neurons whose activity reflects reward prediction errors [1]. Activity in the VTA has a potent effect on cortical sensory representations: pairing stimulation of the VTA with presentation of an auditory tone, for instance, increases the representation area of this tone in the primary auditory cortex [2]. It is unclear why a signal related to reward prediction errors should have such an effect on cortical representations. Here, we use a neural network model to examine this question. We extend a model of synaptic plasticity and representational learning [3] to reproduce the effects of VTA activation on the network’s representation: in the model, as in animals, pairing stimulus presentation with VTA activation shifts the synaptic weights of neurons towards the paired stimulus. The network is subjected to a classification task and is rewarded for taking correct classification decisions. Additionally, at each trial, the network makes a prediction about the reward it expects to receive. The difference between the predicted and received reward makes up a reward prediction error; this error activates the VTA. We perform parameter exploration to determine the optimal VTA activation depending on the value of the reward prediction error. We find that the VTA activation profile that is optimal with respect to classification performance in the model matches the activation profile observed in animals. During training, VTA activation refines synaptic weights with respect to the classification task and significantly improves the network’s performance on the task. Our model therefore provides a well-fitting explanation of why reward prediction error signals affect sensory representations in animals.


Modelling the effects of VTA activation in the network refines synaptic weights (A) and leads to large improvements in classification performances (B). Acknowledgements The authors wish to acknowledge the following grants: RHL, the Studienstiftung des deutschen Volkes and Quebec’s National Research Fund for Nature and Technologies [181120]; KO, the Graduiertenkolleg References 1 Schultz, W., Dayan, P. & Montague, P. R. A neural substrate of prediction and reward. Science 275, 1593–1599 (1997) 2 Bao, S., Chan, V. T. & Merzenich, M. M. Cortical remodelling induced by activity of ventral tegmental dopamine neurons. Nature 412, 79–83 (2001) 3 Keck, C., Savin, C. & Lücke, J. Feedforward Inhibition and Synaptic Scaling–Two Sides of the Same Coin? PLoS Comput. Biol. 8, e1002432 (2012)
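The gating principle — a Hebbian shift of synaptic weights toward the paired stimulus, scaled by a VTA signal derived from the reward prediction error — can be sketched on a toy two-class task. The rectified-RPE activation profile and all parameters below are illustrative assumptions, not the model of [3] or the optimized profile reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=200):
    """Toy two-class stimuli: two noisy prototype patterns."""
    labels = rng.integers(0, 2, n)
    protos = np.array([[1.0, 0.0], [0.0, 1.0]])
    return protos[labels] + 0.3 * rng.standard_normal((n, 2)), labels

def train(X, labels, eta=0.1, epochs=20):
    """Dopamine-gated learning: the weight vector of the chosen class is
    shifted toward the stimulus, scaled by a VTA signal derived from the
    reward prediction error (here simply the rectified RPE)."""
    W = rng.random((2, 2)) * 0.1        # one weight vector per class
    r_hat = 0.5                         # running reward prediction
    for _ in range(epochs):
        for x, y in zip(X, labels):
            choice = int(np.argmax(W @ x))        # winner-take-all decision
            reward = float(choice == y)
            delta = reward - r_hat                # reward prediction error
            vta = max(delta, 0.0)                 # illustrative VTA profile
            W[choice] += eta * vta * (x - W[choice])  # gated Hebbian shift
            r_hat += 0.05 * delta                 # update reward prediction
    return W

def accuracy(W, X, labels):
    return float(np.mean((X @ W.T).argmax(1) == labels))
```

As the reward prediction improves, the RPE shrinks and learning anneals; the gated updates pull each weight vector toward its class prototype, improving classification.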

[W 75] Pre-perceptual grouping of auditory scenes explains contextual biases in the perception of ambiguous tonal shifts. Vincent Adam1 , Claire Chambers2 , Claire Pelofi3 , Maneesh Sahani1 , Daniel Pressnitzer3 1. Gatsby Unit, UCL, Alexandra House, 17 Queen Square, WC1N 3AR, UK 2. Department of Psychology, University of Western Ontario, Canada 3. Département d’Etudes Cognitives, ENS, 45 Rue d’Ulm, 75005 Paris, France doi: 10.12751/nncn.bc2015.0196

Auditory perception is influenced by recent sensory history. Perception of ambiguous tonal shifts can be biased by a previous context (Chambers et al, submitted). In this study stimuli were Shepard tones, i.e. octave-related components of a base frequency with a Gaussian spectral envelope (Shepard, 1964). When judging the direction of shift between two such complexes T1 and T2, listeners report the shorter path between components (distance in log frequency). When T2 components are equidistant from adjacent T1 components (half-octave interval), this cue is removed and listeners report either an upward or downward shift with equal probability. Biases are introduced by preceding the ambiguous pairs with contextual complex tones (C). The perceived pitch shift between T1 and T2 was strongly influenced by the frequency region of C, such that the shift encompassed the frequency region of C. We suggest that the biasing of the perceived shift of these shift-ambiguous pairs arises as a consequence of an underlying general pre-perceptual grouping mechanism, in which spectro-temporal components of an auditory scene are attributed to ongoing ’tracks’ based on spectro-temporal continuity. At the level of individual tracks, local frequency shifts can be extracted and combined to give rise to a global shift percept. This model solves a fundamental attribution problem: how to bind spectro-temporal components in a useful way given the statistics of natural sounds. Applied to the artificial stimuli of the experiment, components of T1 and T2 are attributed to ongoing tracks built from the context. The position of these tracks before the pair biases the attribution of their components and hence the locally extracted frequency shifts. Past exposure mainly influences behaviour in ambiguous conditions, which is the key condition for highlighting this mechanism. Our model reproduces psychophysical results in both ambiguous and non-ambiguous cases, and in both the presence and absence of context. References 1 R. Shepard (1964). JASA, 36(12):2346–2353 10.1121/1.1919362
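For reference, a Shepard tone of the kind used as stimuli — octave-spaced partials under a Gaussian log-frequency envelope — can be synthesized in a few lines; envelope center, width and duration below are illustrative values, not those of the study.

```python
import numpy as np

def shepard_tone(base_freq, dur=0.125, sr=44100.0, center=960.0, width=1.0):
    """Shepard tone: octave-spaced partials of `base_freq` with amplitudes
    drawn from a Gaussian envelope over log-frequency (center in Hz,
    width in octaves). Returns a normalized waveform."""
    f = base_freq
    while f > 32.0:            # extend the partial series down in octaves
        f /= 2.0
    t = np.arange(int(dur * sr)) / sr
    x = np.zeros_like(t)
    while f < sr / 2.0:        # sum octave partials up to Nyquist
        amp = np.exp(-0.5 * (np.log2(f / center) / width) ** 2)
        x += amp * np.sin(2.0 * np.pi * f * t)
        f *= 2.0
    return x / np.max(np.abs(x))
```

Because the envelope, not the base frequency, fixes the dominant spectral region, the strongest partial sits near the envelope center regardless of which octave equivalent of the base frequency is chosen.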

[W 76] Evidencing Perceptual Entropy Modulation in Visual Conscious Experience

Cláudio Eduardo Corrêa Teixeira1,2,3 , Camilla Cabral Ferreira3 , Anderson Raiol Rodrigues1 1. Núcleo de Medicina Tropical, Universidade Federal do Pará, Av. Generalíssimo Deodoro, 92, 66055-240, Umarizal, Belém, Pará, Brazil 2. Laboratório Multidisciplinar, Centro de Ensino Superior do Pará, Av. Almirante Barroso, 3775, 66613-903, Souza, Belém, Pará, Brazil 3. Centro de Ciências Biológicas e da Saúde, Universidade da Amazônia, Av. Alcindo Cacela, 287, 66060-902, Umarizal, Belém, Pará, Brazil doi: 10.12751/nncn.bc2015.0197

Defining information as a reduction of uncertainty, Integrated Information Theory (IIT) postulates that each mechanism of neural activity may significantly reduce perceptual entropy (PE). This would allow information to be generated and organized into a unique, integrated and autonomously undivided whole: consciousness [1]. In this context, PE is a measure of how much neural activity is required for us to experience information as a conscious representation of physical reality, i.e. energy patterns distributed globally and locally in spacetime. We used a psychophysical approach [2] to test whether PE is reduced by lateral interactions that modulate the visual conscious experience of flicker. This would imply that the neural activity behind such psychophysical interactions (e.g. center-surround interactions in neurons' receptive fields and lateral interactions between pools of neurons [3]) also fits what IIT predicts. The perceived flicker strength (PFS) of a circular stimulus modulating at 3, 6 or 12 Hz was quantified when it was presented alone or together with surround stimuli presented in diverse conditions: static, modulating at 25 Hz, or modulating at the same frequency as the test stimulus but in diverse temporal phases. The PFS was quantified monoptically, with participants (n = 6, 23 ± 2 yrs old) using a two-alternative forced-choice procedure to match the modulation depth of a stimulus identical to the test stimulus. The results show that mechanisms involved in psychophysical lateral interactions actively modulate PE (Fig. 1). We propose that the fewer of all potential states of neural activity are elicited by center and surround stimuli, the higher the entropy state assumed by the neural activity elicited by center signals relative to that elicited by surround signals; consequently, the less accurately we consciously experience the information associated with the global and local states of the center stimulus, and vice versa (Fig. 2).

Posters Wednesday

Acknowledgements This work was supported by CNPq (Edital MCT/CNPq Nº 14/2012 – Universal) References 1 Tononi, G. (2012). Integrated information theory of consciousness: an updated account. Archives Italiennes de Biologie, 150: 290-326 DOI: 10.4449/aib.v149i5.1388 2 Teixeira, C. E. C.; Salomão, R. C.; Rodrigues, A. R.; Horn, F. K.; Silveira, L. C. L.; Kremers, J. (2014). Evidence for two types of lateral interactions in visual perception of temporal signals. Journal of Vision, 14(9): 10, 1–18. DOI: 10.1167/14.9.10 3 Kremers, J.; Kozyrev, V.; Silveira, L. C. L.; & Kilavik, B. E. (2004). Lateral interactions in the perception of flicker and in the physiology of the lateral geniculate nucleus. Journal of Vision, 4: 643-663. DOI: 10.1167/4.7.10

[W 77]

Sound Localization in Partially Updated Room Simulations

Samuel W. Clapp1 , Bernhard U. Seeber1 1. Audio Information Processing, Technical University of Munich, Theresienstr. 90, 80333 Munich, Germany doi: 10.12751/nncn.bc2015.0198

Room auralization systems have many applications in virtual reality and neuroscience research. These systems can simulate many different types of spaces in a controlled manner, allowing for the investigation of both low-level cognitive processes (such as localization or loudness perception) and higher-level ones (such as auditory scene analysis or stream segregation) in realistic acoustic environments. One of the present challenges is to extend these systems with real-time capabilities. The goal is to facilitate changes in the room simulation while the simulation is running, such as a moving source or receiver. The main challenges relate to the use of computing resources, as room simulations are computationally intensive, but need to be recalculated quickly enough in a real-time scenario in order to draw conclusions about perception in a similar real-life scenario. Here, in preparation for implementing real-time simulations, a study was conducted to examine how a partial update in the room simulation with a moving source affects sound localization. The study employed our Simulated Open Field Environment system that has been used extensively in psychoacoustic research. Room reflections were simulated using the image source method, where the source location is mirrored repeatedly across the boundaries of the room to determine the spatial position and timing of individual reflections, and where computation time increases with reflection order. Therefore, the goal of this study is to examine the effects on localization when: (1) image sources are recalculated for a new position up to a finite order and (2) higher-order image source locations are retained from the previous position. Current results from five listeners show that the required update order of image sources is highly influenced by the source-receiver distance, with smaller distances being more robust to inaccuracies in the room simulation. Acknowledgements Funded by BMBF 01 GQ 1004B.
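The mirroring step of the image source method can be illustrated with a shoebox-room sketch. This is not the simulation code used in the study; the room geometry, positions and speed of sound below are placeholder values:

```python
import numpy as np

def image_sources(src, room, order=1):
    """Image sources for a shoebox room. src: (x, y, z) source position;
    room: (Lx, Ly, Lz) dimensions. Mirroring the source across each wall
    gives the apparent position of one reflection; repeating the
    mirroring yields higher reflection orders. Illustrative sketch only."""
    src = np.asarray(src, float)
    room = np.asarray(room, float)
    sources = [src]
    for _ in range(order):
        new = []
        for s in sources:
            for d in range(3):               # mirror across both walls per axis
                for wall in (0.0, room[d]):
                    m = s.copy()
                    m[d] = 2 * wall - s[d]
                    new.append(m)
        sources.extend(new)
    return np.unique(np.round(np.array(sources), 9), axis=0)

def delays(sources, receiver, c=343.0):
    """Propagation delay (s) from each (image) source to the receiver."""
    r = np.linalg.norm(np.asarray(sources) - np.asarray(receiver), axis=1)
    return r / c
```

The number of image sources grows rapidly with reflection order, which is why computation time increases with order and why truncating the update order, as examined in the study, saves time in a real-time scenario.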

[W 78]

Perceptual adaptation to auditory binaural cues

Marko Takanen1 , Nelli Salminen2 , Bernhard Seeber1

Conscious perception relies on sensory input processed by the nuclei of the nervous system. Neuronal adaptation shapes the input these nuclei provide and thereby also affects our auditory perception. For instance, sound localization, which relies on binaural cues, can be biased by an adaptor carrying specific localization information. Here, we investigated how the auditory system adapts to the binaural cues, i.e. to interaural time (ITD) and level (ILD) differences, using wide-band stimuli containing the natural ITD and/or ILD cues that exist in free-field listening conditions. To this end, HRTF, ITD, and ILD targets were created using white-noise signals filtered with (modified) non-individual head-related transfer functions (HRTFs; horizontal angles between ±15° in 5° spacing), so that the ITD and ILD targets contained the phase and magnitude responses of the HRTFs, respectively. Psychometric functions for laterality were obtained in a spatial discrimination paradigm before and after exposure to an adaptor sequence. Three adaptation conditions were employed following Phillips & Hall (2005): one asymmetric, with an ILD adaptor on one side and an ITD adaptor on the other, and two symmetric conditions with either ITD or ILD adaptors on both sides of the midline. The adaptors were presented in alternating sequences with directional cues corresponding to ±60°. Each condition had a specific effect on the psychometric functions. The asymmetric condition shifted the thresholds for all targets, potentially because the ILD adaptor affected the ILD channel tuned to that side. Both symmetric conditions reduced the slopes of the functions: ITD adaptors reduced the slope for ITD targets, and this effect was more pronounced with ILD adaptors and ILD targets. Although monaural effects may have contributed to the shift caused by the asymmetric condition, the results imply that both ITD and ILD processing are prone to binaural adaptation, the latter perhaps more than the former.
Acknowledgements Supported by BMBF 01 GQ 1004B. References 1 Phillips & Hall (2005) http://dx.doi.org/10.1016/j.heares.2004.11.001
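Laterality judgments of this kind are commonly summarized by fitting a psychometric function, whose shift reflects adaptation-induced bias and whose slope reflects sensitivity. A numpy-only sketch using a logistic function and a grid-search maximum-likelihood fit (the grid ranges and parametrization are arbitrary, not the analysis used in the study):

```python
import numpy as np

def logistic(x, mu, sigma):
    """Psychometric function: P('right') for lateral position x, with
    point of subjective equality mu and slope parameter sigma."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

def fit_psychometric(x, n_right, n_total):
    """Maximum-likelihood fit of (mu, sigma) by grid search.
    A shift of mu indicates adaptation-induced bias; a larger sigma
    indicates a shallower slope, i.e. reduced sensitivity."""
    x = np.asarray(x, float)
    mus = np.linspace(x.min(), x.max(), 201)
    sigmas = np.linspace(0.5, 20.0, 100)
    best, best_ll = (mus[0], sigmas[0]), -np.inf
    for mu in mus:
        for s in sigmas:
            p = np.clip(logistic(x, mu, s), 1e-9, 1 - 1e-9)
            ll = np.sum(n_right * np.log(p)
                        + (n_total - n_right) * np.log(1 - p))
            if ll > best_ll:
                best_ll, best = ll, (mu, s)
    return best
```

Comparing the fitted (mu, sigma) before and after the adaptor sequence then quantifies the threshold shifts and slope reductions reported above.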



1. Audio Information Processing, Technische Universität München, Theresienstr. 90, 80333 Munich, Germany 2. Department of Neuroscience and Biomedical Engineering, Aalto University, Otakaari 3J, 02150 Espoo, Finland doi: 10.12751/nncn.bc2015.0199


[W 79] An Onset Enhancement Algorithm for Improving ITD-Based Source Localization by Bilateral Cochlear Implant Users in Reverberant Conditions Aswin A Wijetillake1 , Bernhard U Seeber1 1. Fachgebiet Audio-Signalverarbeitung, Technische Universität München, Theresienstr. 90, 80333 München, Germany doi: 10.12751/nncn.bc2015.0200

Interaural timing difference (ITD) cues can often provide benefits to unimpaired listeners in reverberant spaces. However, similar benefits typically remain unavailable to bilateral cochlear implant (BiCI) users. BiCI users are relatively insensitive to ITDs in the temporal fine structure of a signal but can benefit from cues in the onsets and the slowly fluctuating temporal envelope (Kerber & Seeber, 2013), particularly when onsets are sharp and fluctuations are deep. Reverberation can, however, reduce envelope sharpness and depth, and hence ITD salience. The current study evaluates a novel onset enhancement algorithm that selectively sharpens and deepens the onsets of peaks in the signal envelopes of a standard 'CIS' sound coding strategy. The algorithm uses the short-time direct-to-reverberant ratio (DRR) to enhance only those peaks that are dominated by the direct signal rather than by reflections (Monaghan & Seeber, 2011). The algorithm's efficacy in improving ITD sensitivity to the direct sound in reverberant space, and its impact on speech comprehension, was tested with an intracranial lateralization task and an Oldenburg sentence test, respectively. Both tests employed speech stimuli with a range of DRRs, presented via direct stimulation. Reverberant space was simulated by convolving anechoic speech stimuli with binaural room impulse responses, with DRR controlled by varying the source-receiver distance. ITDs were applied to the direct signal, after its interaural level difference (ILD) was set to 0 dB, without altering the reflections. This ensured that outcomes were not confounded by ILDs in the direct signal or by perceived positional shifts of reflections. Evaluations using vocoders with unimpaired listeners indicated that the algorithm can significantly improve ITD sensitivity for DRRs as low as -3.6 dB without degrading speech comprehension. Data collection with BiCI users is currently ongoing; the outcomes will be discussed in this presentation.
Acknowledgements This study is supported by BMBF 01 GQ 1004B References 1 Kerber, S., and Seeber, B. U. (2013). "Localization in reverberation with cochlear implants: predicting performance from basic psychophysical measures," J Assoc Res Otolaryngol 14, 379-392. 2 Monaghan, J. J. M., and Seeber, B. U. (2011). "Exploring the benefit from enhancing envelope ITDs for listening in reverberant environments," in Int. Conf. on Implantable Auditory Prostheses (Asilomar, CA)
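The principle of DRR-gated onset enhancement can be caricatured on a single channel envelope: amplify the rising (onset) portions of the envelope, but only where a short-time direct-to-reverberant estimate indicates that the direct sound dominates. The gain and threshold values below are invented for illustration; this is a toy sketch, not the study's algorithm:

```python
import numpy as np

def enhance_onsets(env, drr_db, gain=3.0, drr_thresh_db=-4.0):
    """Toy onset enhancement of one channel envelope.
    env: non-negative envelope samples; drr_db: per-sample short-time
    DRR estimate (dB). Rising envelope segments are amplified only where
    the DRR exceeds the threshold, so late reflections are left alone."""
    env = np.asarray(env, float)
    rise = np.maximum(np.diff(env, prepend=env[0]), 0.0)   # onset steepness
    gate = (np.asarray(drr_db, float) > drr_thresh_db).astype(float)
    return env + gain * gate * rise
```

With a high DRR the onset sample is boosted; with a low DRR (reflection-dominated) the envelope passes through unchanged.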


[W 80]

A visual pathway for looming-evoked escape in larval zebrafish

Incinur Temizer1 , Joseph C. Donovan1,2 , Herwig Baier1 , Julia L. Semmelhack1

Avoiding the strike of an approaching predator requires rapid visual detection of a looming object, followed by a directed escape maneuver. While looming-sensitive neurons have been discovered in various animal species, the relative importance of stimulus features that are extracted by the visual system is still unclear. Furthermore, the neural mechanisms that compute object approach are largely unknown. We found that a virtual looming stimulus, i.e., a dark expanding disk on a bright background, reliably evoked rapid escape movements. Related stimuli, such as dimming, receding, or bright looming objects, were substantially less effective, and angular size was a critical determinant of escape initiation. Two-photon calcium imaging in retinal ganglion cell (RGC) axons revealed three retinorecipient areas that responded robustly to looming stimuli. One of these areas, the optic tectum is innervated by a subset of RGC axons that respond selectively to looming stimuli. Laser-induced lesions of the tectal neuropil impaired the behavior. Our findings demonstrate a visually mediated escape behavior in zebrafish larvae exposed to objects approaching on a collision course. This response is sensitive to spatiotemporal parameters of the looming stimulus. Our data indicate that a subset of RGC axons within the tectum responds selectively to features of looming stimuli, and that this input is necessary for visually-evoked escape. Acknowledgements Funding was provided by the Max Planck Society and the DFG. I.T. was supported by a Boehringer Ingelheim Fonds PhD Fellowship. J.L.S. was supported by a Helen Hay Whitney Postdoctoral Fellowship.
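The angular-size variable found to be critical here has a simple closed form for an object approaching at constant speed. A small, purely geometric sketch (no parameters from the study):

```python
import math

def angular_size_deg(radius, distance):
    """Visual angle (deg) subtended by an object of given radius."""
    return 2.0 * math.degrees(math.atan(radius / distance))

def looming_angle_deg(radius, speed, t_to_collision):
    """Angular size as a function of the time remaining to collision:
    theta(t) = 2 * atan(r / (v * t)); it diverges as t -> 0, producing
    the rapid expansion thought to trigger escape."""
    return angular_size_deg(radius, speed * t_to_collision)

def time_at_threshold(radius, speed, theta_deg):
    """Time to collision at which theta crosses a threshold angle."""
    return radius / (speed * math.tan(math.radians(theta_deg) / 2.0))
```

A fixed angular-size threshold thus predicts that escape is initiated earlier (at a larger time to collision) for larger or slower objects, a signature that can be tested against behavioral data.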

[W 81] Encoding of high frequency signals in single neurons of the electrosensory system of Apteronotus leptorhynchus Jan Benda1 , Jan Grewe1 , Fabian Sinz2 1. Institute for Neurobiology, Eberhard Karls Universität Tübingen, Auf der Morgenstelle 28 E, 72076 Tübingen, Germany 2. Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Suite S553, Houston, 77030 Texas, USA doi: 10.12751/nncn.bc2015.0202

Theoretical work on integrate-and-fire neurons has shown that stimulus frequencies that are much higher than the average firing rate are only weakly encoded in the evoked spike train. Recent behavioral data of the weakly electric fish Apteronotus demonstrated that behaviorally relevant electrocommunication signals could result in stimulus frequencies of several hundred Hertz. These animals are nocturnal hunters that rely on a self-generated, oscillating electric field for prey detection, navigation, and communication. Cutaneous electroreceptors of the active electrosensory system are tuned to the oscillatory frequency of the electric organ discharge (EOD) and encode amplitude modulations of the field in their firing rate.


1. Genes-Circuits-Behavior, Max Planck Institute for Neurobiology, Am Klopferspitz 18, Germany 2. Program in Neuroscience, University of California at San Francisco, San Francisco, CA 94143, USA doi: 10.12751/nncn.bc2015.0201

Here we present recordings from electroreceptor afferents that were stimulated by simulated oscillatory fields of other individuals or even other species, with frequencies several hundred Hertz below the fish's own EOD frequency. We found that P-unit electroreceptors show phase locking not only to the frequency of the fish's own field, but also to those of foreign fish whose EOD frequencies are far off the optimal tuning of the recorded fish's receptors. This means that information about the presence of a foreign fish is encoded in these afferents. By means of an integrate-and-fire model that faithfully reproduces P-unit spiking activity, we investigate the mechanisms and requirements for this unexpected response. Our electrophysiological data demonstrate that encoding of high frequencies is possible and of behavioral relevance in real neurons. In addition, our results potentially indicate a so far neglected coding regime of the electrosensory system that also challenges the idea of a private communication channel of wave-type electric fish.
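A minimal version of such an integrate-and-fire simulation, with phase locking quantified by vector strength, can be sketched as follows. All parameters (membrane time constant, field amplitudes, frequencies) are invented for illustration; this is not the fitted P-unit model of the study:

```python
import numpy as np

def lif_spikes(stim, dt=1e-5, tau=5e-4, thresh=1.0, reset=0.0):
    """Leaky integrate-and-fire neuron driven by stim; returns spike
    times (s). Parameter values are illustrative placeholders."""
    v, spikes = 0.0, []
    for i, s in enumerate(stim):
        v += dt / tau * (-v + s)
        if v >= thresh:
            spikes.append(i * dt)
            v = reset
    return np.array(spikes)

def vector_strength(spike_times, freq):
    """Phase locking of spikes to a frequency: 1 = perfect, 0 = none."""
    if len(spike_times) == 0:
        return 0.0
    phases = 2 * np.pi * freq * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Drive the model with its 'own' EOD plus a weaker foreign-fish field.
dt = 1e-5
t = np.arange(0.0, 0.2, dt)
f_own, f_foreign = 800.0, 600.0
field = (1.5 * np.sin(2 * np.pi * f_own * t)
         + 0.3 * np.sin(2 * np.pi * f_foreign * t))
stim = 2.0 * np.maximum(field, 0.0)     # rectified receptor drive
spikes = lif_spikes(stim, dt=dt)
vs_own = vector_strength(spikes, f_own)
vs_foreign = vector_strength(spikes, f_foreign)
```

Comparing `vs_own` and `vs_foreign` in such a sketch shows how locking to a foreign frequency can be probed even when spikes are primarily driven by the fish's own field.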

[W 82] Differential drive of hippocampal place cells by visual and proprioceptive input Olivia Haas1,2,3 , Josephine Henke1,2,3 , Christian Leibold1,2,3 , Kay Thurley1,2 1. Department Biology II, Ludwig-Maximilians-Universität, München, Germany 2. Bernstein Center for Computational Neuroscience, München, Germany 3. Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität, München, Germany doi: 10.12751/nncn.bc2015.0203

Hippocampal place cell activity represents an animal's location within its environment and is therefore crucial for successful spatial orientation and navigation. To accomplish such a representation, the hippocampus integrates signals of multiple sensory modalities. However, it is not clear how these different sensory inputs contribute to place field formation. To determine the relative impact of visual and proprioceptive sensory information on place cell firing, we performed in vivo extracellular recordings of small neuronal populations in the hippocampus of behaving Mongolian gerbils (Meriones unguiculatus). The recordings were done in a virtual reality setup, which enabled us to introduce online changes to the spatial environment. By altering the proportionality factor (gain factor) between the movement of the animal and the speed of the visual projection on a single-trial basis, visual and proprioceptive sensory inputs were put into conflict. We find two populations of place cells in hippocampal subfields CA1 and CA3: one that forms its place field based on visual input within the virtual linear track, and a second that relies predominantly on proprioception. Both classes of place cells dynamically adjusted their firing on a single-run basis depending on the proportionality factor between own movement and visual flow. These results suggest that the mechanism combining sensory information into place field firing in the hippocampus acts on a second or even millisecond timescale. This indicates that different hippocampal microcircuits are involved in processing sensory information from distinct sensory modalities. Acknowledgements This work was funded by the BMBF (Federal Ministry of Education and Research, Germany) via the BCCN Munich (01GQ1004A).


[W 83] Temporal estimation via intracortical microstimulation for goal-directed actions Marianna Semprini1 , Fabio Boi1 , Matteo Falappa1,2 , Ilaria Cosentini1,2 , Edoardo Balzani2 , Valter Tucci2 , Alessandro Vato1

Time perception on the timescale of seconds is crucial for many behaviors. Recent studies suggest that the prefrontal cortex is involved in time estimation alongside goal-directed decisions, but the neural mechanisms underlying such cognitive processes remain mostly unexplored. To reduce this knowledge gap, we designed an experiment in which we trained rats to estimate the correct timing for a goal-directed action by means of intracortical microstimulation (ICMS). Animals were implanted with two arrays of microelectrodes: one placed in the somatosensory barrel cortex and one in the prefrontal cortex. The experiments took place in an operant conditioning chamber equipped with a press lever and a central hole for nose poking and food withdrawal. To initiate a trial the animal had to nose-poke the central hole, which triggered an immediate ICMS delivery to the somatosensory cortex. The stimulation indicated the temporal window within which a lever press would produce a food reward: the frequency of electrical pulses delivered to the cortex could be either low (10 Hz) or high (80 Hz), corresponding to reward availability in the temporal window of 10-20 s or 40-60 s post-stimulation, respectively. Once performance reached steady state, rats switched to a second experimental phase in which probe trials with an intermediate stimulation frequency (45 Hz) were also presented. While the animals performed the task, spiking activity was recorded from the prefrontal cortex. This experimental framework allows the investigation of time estimation as well as the characterization of the activity of prefrontal cortex neurons during a temporal decision task. Moreover, it allows us to explore whether rats are able to infer temporal information from ICMS. References 1 Xu et al. 2013 10.1073/pnas.1321314111 2 Maimon and Assad 2006 doi:10.1038/nn1716

[W 84]

Retinal Image Simulation, a Basis for Visual Perception

Catarina Dias1 , Katharina Rifai1 , Siegfried Wahl1 1. Institute for Ophthalmic Research, University of Tuebingen, Roentgenweg 11, 72076 Tuebingen, Germany doi: 10.12751/nncn.bc2015.0205

Human perception is based on the processing of the image that reaches the retina. However, many image-based vision perception models do not take into account the influence of the eye on the retinal image. We simulate the retinal image of a 3D scene, affected by individual differences of human eyes, as well as external influences, such as spectacles. To this end, we use PBRT [1], a ray tracing based rendering tool, that relies on a 3D representation of a virtual scene. We developed an extension to PBRT in order to render the retinal image of a scene, subject to the influence of schematic eye models


1. Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Corso Bettini 31, 38068 Rovereto, Italy 2. Neuroscience and Brain Technologies, Istituto Italiano di Tecnologia, via Morego 30, 16163 Genova, Italy doi: 10.12751/nncn.bc2015.0204

and the shape of the retina. An eye model is a theoretical representation of an eye in which the eye's refractive structures are described as lenses. With the developed tool it is possible to represent any eye model consisting of rotationally symmetric surfaces with conicoid shape (ellipsoids, paraboloids and hyperboloids). Here, we focus on a wide-angle (70°) schematic eye model [2], optimised for anatomical accuracy and for good agreement with the human eye's optical aberrations. In addition, it allows adjustment of the accommodation level. This enables us to describe the retinal image of human eyes and to compare its features centrally and peripherally, taking into account the irradiance distribution on the retina as well as image quality aspects. Our tool gives new insight into the properties of the retinal image, especially regarding the periphery. The simulation of the retinal image can be used as a more precise input than photographs for image-based vision perception models, because it does not suffer from artificial distortions due to a camera lens system and reproduces the correct distribution of irradiance on the retina. References 1 Matt Pharr, Greg Humphreys, "Physically Based Rendering, Second Edition: From Theory To Implementation", Morgan Kaufmann Publishers Inc., San Francisco, CA, 2010. ISBN 978-0123750792 2 R. Navarro, J. Santamaría, and J. Bescós, "Accommodation-dependent model of the human eye with aspherics," J. Opt. Soc. Am. A 2, 1273-1280 (1985). 10.1364/JOSAA.2.001273
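The role of a schematic eye model in such a renderer can be illustrated with paraxial ray-transfer matrices. Here a textbook reduced single-surface eye stands in for the wide-angle Navarro model; the radius and refractive index below are classical reduced-eye values, not the study's parameters:

```python
import numpy as np

def refraction(R, n1, n2):
    """Paraxial refraction at a spherical surface of radius R (m), from
    index n1 into n2, acting on rays (height y, reduced angle n*u)."""
    return np.array([[1.0, 0.0], [-(n2 - n1) / R, 1.0]])

def translation(d, n):
    """Propagation over a distance d (m) inside a medium of index n."""
    return np.array([[1.0, d / n], [0.0, 1.0]])

# Reduced eye: one refracting surface, R = 5.55 mm, internal index 1.336.
n_air, n_eye, R = 1.0, 1.336, 5.55e-3
y, nu = refraction(R, n_air, n_eye) @ np.array([1e-3, 0.0])  # parallel ray
# Distance behind the surface where the ray crosses the axis (the focus):
focus = -y * n_eye / nu
# Propagating to that plane should bring the ray height to ~0:
y_f, _ = translation(focus, n_eye) @ np.array([y, nu])
```

Tracing many such rays through all surfaces of a full schematic eye, with a `translation` step between successive surfaces, yields the retinal intersection points that a renderer accumulates into the simulated retinal image.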

[W 85] Spontaneous emergence of structured responses in a random neural network in-vitro Manuel Schottdorf1,2,3 , Hecke Schrobsdorff1,2 , Walter Stühmer1,3 , Fred Wolf1,2 1. Bernstein Center for Computational Neuroscience, Göttingen, Germany 2. Nonlinear Dynamics, MPI for Dynamics and Self-Organization, Am Fassberg 17, Göttingen, Germany 3. Molecular Biology of Neuronal Signals, MPI of Experimental Medicine, Hermann-Rein-Strasse 3, Göttingen, Germany doi: 10.12751/nncn.bc2015.0206

Neural networks with connections organized by probabilistic rules are conceptually powerful model systems. Among others, random neural networks have been shown (1) to generically exhibit computationally favorable properties for stimulus representation and information processing (e.g. Lukoševičius & Jaeger, 2009), (2) to dynamically generate a state of irregular spiking activity (van Vreeswijk & Sompolinsky, 1996), and (3) to account for visual cortical orientation selectivity (Ernst et al., 2001). What is left open in these theoretical studies is the question of whether such ideas are viable in random networks of living cells. We address this problem using a dissociated culture of rat cortical neurons. The neuronal connection patterns in such cultures are substantially less organized than neural circuits in the brain. We then drive these neurons optogenetically with spatially complex light patterns, generated by a holographic photostimulation system (Golan et al. 2009), and monitor neural responses with a multielectrode array. Stimulating the cell culture with moving gratings reveals a substantial degree of orientation tuning. We probe this orientation tuning and its origin by applying various stimulation conditions, i.e. varying the spatial and temporal frequencies of the grating, and by interfering pharmacologically with the network. The orientation selectivity described here to some extent resembles cortical orientation selectivity.

Acknowledgements We would like to acknowledge helpful support from Andreas Neef (MPI-DS), Gerd Rapp and Oliver Wendt from Rapp OptoElectronic, and Shy Shoham from the Technion, Israel References 1 M. Lukoševičius & H. Jaeger: "Reservoir Computing Approaches to Recurrent Neural Network Training", Computer Science Review 3(3): 127-149 (2009) 2 C. van Vreeswijk & H. Sompolinsky: "Chaos in neuronal networks with balanced excitatory and inhibitory activity", Science 274(5293): 1724-1726 (1996) 3 U. Ernst, K. Pawelzik, C. Sahar-Pikielny & M. Tsodyks: "Intracortical origin of visual maps", Nature Neuroscience 4: 431-436 (2001) 4 L. Golan, I. Reutsky, N. Farah & S. Shoham: "Design and characteristics of holographic neural photo-stimulation systems", Journal of Neural Engineering 6: 066004 (2009)
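Orientation tuning of the kind reported here is commonly quantified with a circular-variance-based selectivity index; the specific analysis used in the study is not stated, so the following is a generic sketch:

```python
import numpy as np

def osi(angles_deg, responses):
    """Orientation selectivity index from responses to gratings at
    several orientations: |sum r_k * exp(2i*theta_k)| / sum r_k,
    i.e. 1 minus the circular variance on the doubled-angle axis.
    0 = untuned, 1 = responsive to a single orientation only."""
    theta = np.deg2rad(np.asarray(angles_deg, float))
    r = np.asarray(responses, float)
    return float(np.abs(np.sum(r * np.exp(2j * theta))) / np.sum(r))
```

The doubled angle (`2j * theta`) makes the index invariant to the 180° periodicity of orientation, so gratings drifting in opposite directions along the same axis count as the same orientation.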

[W 86] Lower Leg Orthosis with Observation Learning for Individual Tuning allows Quasi-Continuous Gait Control 1. III. Physical Institute – Biophysics, Georg-August University, Göttingen, Germany 2. Bernstein Focus Neurotechnology Göttingen, Georg-August University, Göttingen, Germany 3. Bernstein Center for Computational Neuroscience, Georg-August University, Göttingen, Germany 4. CBR Embodied AI&Neurorobotics Lab, The Maesk Mc-Kinney Moller Institute, University of Southern Denmark, Odense, Denmark doi: 10.12751/nncn.bc2015.0207

Lower-extremity supportive devices are prescribed to support, correct and assist the patient's movement. Traditional control methods, like finite state machines, allow sophisticated control, but this comes at the expense of complexity due to the growth in the number of states and the transitions between them. Modern hybrid controllers introduce model-based gait tracking to coordinate sets of finite state controllers, thus extending the possible motions [1,2]. However, the definition of state transitions often includes thresholds and conditions, which define discrete switching points of the controller and limit when control is applied. Patients must have specific abilities to satisfy all of these conditions, which renders some controllers inoperable for them. Furthermore, this design hinders adaptation to changes in the patient's abilities or to muscle fatigue, e.g., when fatigue pushes the transition condition out of range. To overcome the restrictions imposed by discrete switching points and transition conditions, we propose a controller which continuously tracks the device's trajectory and thereby fits the individual gait. We implement internal models which transform the trajectory into a linear measure of gait progress. By the use of artificial neural networks to enable (re-)training with gait samples, the system can adapt to the patient's behaviour at almost any time. The proposed controller switches between a set of these specialised gait controllers to cope with more gaits and environments. The presented study shows that the control quality for continuous trajectory tracking depends on the linearity of the gait-progress transformation. We conclude that gait trajectory tracking applies control smoothly and almost continuously. Thereby, it can be fit to support individual gait, independent of variations like speed or stride length. The gait switching extends this high control resolution to complex environments and enhances the patient's independence.
Acknowledgements This research was supported by the BMBF-funded BFNT and BCCN Göttingen with grant numbers 01GQ0810 (project 3A) and 01GQ1005A (project D1), respectively, and the Emmy Noether Program (DFG,MA4464/3-1).



Jan-Matthias Braun1,2 , Poramate Manoonpong2,3,4 , Florentin Wörgötter1,2,3

References 1 H. Varol, F. Sup and M. Goldfarb, "Real-time gait mode intent recognition of a powered knee and ankle prosthesis for standing and walking", 2nd IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics, 2008, 66-72. 10.1109/BIOROB.2008.4762860 2 F. Sup, H. Varol and M. Goldfarb, "Upslope Walking With a Powered Knee and Ankle Prosthesis: Initial Results With an Amputee Subject", IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2011, 19, 71-78. 10.1109/TNSRE.2010.2087360

[W 87] Impairment of evoked gamma range oscillations in schizophrenia: a modelling study Christoph Metzner1 , Achim Schweikard1 , Bartosz Zurowski2 1. Institute for Robotics and Cognitive Systems, University of Luebeck, Ratzeburger Allee 160, 23552 Luebeck, Germany 2. Center for Integrative Psychiatry, University of Luebeck, Ratzeburger Allee 160, 23552 Luebeck, Germany doi: 10.12751/nncn.bc2015.0208

Multiple cortical circuit abnormalities have been characterized in schizophrenia (1); however, their actual consequences remain poorly understood. MEG/EEG click entrainment experiments showed decreased power at 40 Hz but no alterations at 30 Hz in patients vs. controls (3,4). Here we investigated the impact of such abnormalities on oscillatory activity by simulating entrainment in a model of the primary auditory cortex (2). The model was driven at 40 Hz (gamma range) and, as a control, at 30 Hz. Similar to previous approaches (5), but focusing on evoked rather than spontaneous activity, we explored the effects of (a) connectivity disturbances (reduction of I-to-E and of I-to-I connections), (b) a prolonged GABAergic decay time constant (at exc. and inh. cells), and (c) reduced inhibitory output (at exc. and inh. cells) on oscillatory power, in a search over 2025 parameter combinations (PCs). The control model entrained to both driving frequencies (Fig. (a)), in agreement with experiments (3,4). The PCs producing the strongest reduction of power in the gamma component were characterized by a prolonged decay time at I-to-E synapses together with intact I-to-E connectivity and a 50% reduction in I-to-I connectivity (Fig. (b)). However, these PCs also produced a marked reduction of power in the 30 Hz band and thus did not show a valid 'schizophrenic' response. Among the PCs showing valid 30 Hz responses, two regions in the parameter space could be identified: one characterized by a prolonged decay time at I-to-I synapses and increased I-to-I output, and one characterized by reduced I-to-I connectivity and increased I-to-I output (Fig. (c) and (d), respectively). Although these groups all produced a strong gamma reduction, they showed very different dynamics (see Fig. (e)-(l)). This extensive parameter search identified specific dynamics that can produce the gamma entrainment deficits seen in patients. The results narrow the space of possible mechanisms at the cellular/synaptic level.

Fig.: Power spectra and spike frequency histograms. References 1 10.1093/schbul/sbn070 2 10.1186/1471-2202-14-S1-P23 3 10.1001/archpsyc.56.11.1001 4 10.1152/jn.00870.2007 5 10.3389/neuro.09.033.2009
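Entrainment strength in such simulations is typically read off as spectral power in a narrow band around the driving frequency. A generic sketch (the band width, sampling rate, signal amplitudes and noise level below are arbitrary, not taken from the model above):

```python
import numpy as np

def entrainment_power(signal, fs, f_drive, bw=2.0):
    """Power of a (simulated) population signal within +/- bw Hz of the
    driving frequency, from the FFT power spectrum."""
    sig = np.asarray(signal, float)
    spec = np.abs(np.fft.rfft(sig - sig.mean())) ** 2 / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    band = (freqs >= f_drive - bw) & (freqs <= f_drive + bw)
    return float(spec[band].sum())

# Toy comparison: a strongly vs. a weakly entrained 40 Hz response.
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
strong = np.sin(2 * np.pi * 40.0 * t) + 0.1 * rng.standard_normal(t.size)
weak = 0.2 * np.sin(2 * np.pi * 40.0 * t) + 0.1 * rng.standard_normal(t.size)
```

Applying the same measure at 40 Hz and at the 30 Hz control frequency is what distinguishes a selective gamma deficit from an overall loss of entrainment.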

Motor processing

[W 88] Fast-muscle contraction as a proxy to embodiment and BCI-control in Tetraplegia: An EEG study in immersive virtual reality. Giulia Rizza1,2 , Enea Francesco Pavone2,3 , Gaetano Tieri1,2 , Giuseppe Spinelli1,2 , Salvatore Maria Aglioti1,2 1. Department of Psychology, University of Rome “La Sapienza”, Via dei Marsi 78, Italy 2. Fondazione Santa Lucia, Via Ardeatina 306, Italy 3. Braintrends Ltd, Applied Neuroscience, Italy doi: 10.12751/nncn.bc2015.0209

Allowing humans, particularly those suffering from somatosensory and motor disability (e.g. after spinal cord injury), to control artificial virtual or physical agents through their brain is a fundamental challenge for both neuroscience and engineering. A crucial process for achieving optimal control may be the induction of embodiment, i.e. the feeling that an artificial agent is part of our body (ownership) and that we are responsible for its movements (agency). Combining EEG recording and immersion in a virtual environment (Cave system), we previously demonstrated (Pavone et al., 2015) that observing, either in first-person (1pp) or third-person perspective (3pp), wrong movements performed by an avatar activated the onlookers' error-monitoring brain systems. More specifically, subjective reports of higher embodiment paralleled higher medial-frontal theta-band synchronization and a larger error-related negativity deflection in reaction to the erroneous grasping of an avatar seen in the 1pp condition. In the present study, we tested a tetraplegic patient and twelve healthy participants as a control group in a modified version of the original paradigm, in which participants were requested to perform a short, rapid contraction of an axial muscle (monitored through electromyographic burst activation) before passively observing movements of the avatar seen in 1pp. In the tetraplegic patient, the contraction triggered the start of the avatar's action and activated a cascade of events ultimately leading to an increased sense of agency and ownership of the acting virtual arm, unlike passive observation, where the sense of embodiment was lost. In the control group, both passive observation and active control elicited a high sense of embodiment. We confirmed that the increase of embodiment paralleled medial-frontal theta activity, suggesting that oscillations in this frequency band may represent a signature of embodiment that may be crucial for improving the flexibility of current brain-computer interface devices. Acknowledgements The study was supported by the EU Project VERE, http://www.vereproject.org/ References 1 Pavone EF, Tieri G, Rizza G, Tidoni E, Grisoni L, Aglioti SM (2015). Embodying others in immersive virtual reality: electro-cortical signatures of monitoring the errors in the actions of an avatar seen from a first-person perspective. Under review

[W 89]

Connectomic analysis of the larval zebrafish spinal cord

Fabian Svara1 , Winfried Denk2 , Johann Bollmann1 1. Dept. of Biomedical Optics, Max Planck Institute for Medical Research, Jahnstr. 29, 69120 Heidelberg, Germany 2. Electrons - Photons - Neurons, Max Planck Institute for Neurobiology, Am Klopferspitz 18, 82152 Martinsried, Germany doi: 10.12751/nncn.bc2015.0210

The nervous system can generate different classes of motor patterns, giving rise to a rich repertoire of motor behaviors. Within a class, the motor pattern can be modulated on a graded scale, e.g. with respect to movement amplitude and frequency, which determines the speed of locomotion, or lateral bias in muscle recruitment, which leads to a change of direction. A fixed number of motoneurons in the spinal cord is available to mediate the generation and gradation of motor patterns. The set of motoneurons can be subdivided based on positional and functional criteria, suggesting the existence of subpools that are differentially recruited depending on the choice of motor pattern. Indeed, in the larval zebrafish spinal cord, a dorsoventral organization of motoneurons and interneurons exists that reflects the differential recruitment of these cell types during fast and slow swim speeds. More generally, spinal cord neurons are thought to form distinct microcircuits serving modular functions such as generation and modulation of rhythm, left-right alternation or antagonistic muscle recruitment. A comprehensive understanding, however, of how these microcircuits are activated and differentially recruit the motoneuron pool is lacking. Here, we use serial block-face electron microscopy to investigate connectomic principles in the spinal cord that may underlie the recruitment of distinct pools of motoneurons for controlling different motor patterns. Our approach reveals how the population of descending reticulospinal axons connects to primary and secondary motoneurons, which are thought to contribute to distinct motor behaviors. Furthermore, it allows us to identify new neuronal components within a microcircuit, e.g. that of the Mauthner cell escape network. An extended analysis of the connectivity of motoneurons with local microcircuits aims at understanding how combinations of motoneurons may be dynamically recruited during different motor patterns.

[W 90] Correlation of EEG signals during simple and combined motor imageries Cecilia Lindig-León1,2,3, Laurent Bougrain2,3 1. Neurosys Team, CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54500, France 2. Artificial Intelligence and Complex Systems, Université de Lorraine/LORIA, Vandoeuvre-lès-Nancy, F-54506, France 3. Neurosys Team, Inria, Villers-lès-Nancy, F-54500, France doi: 10.12751/nncn.bc2015.0211

Imaginary motor tasks cause brain oscillations that can be detected over the primary sensorimotor cortex through the analysis of EEG recordings [1], [2]. However, processing in the human brain cannot be modeled as a set of localized regions, with one brain area corresponding to one brain function, nor as a rigid organization of unvarying cells. On the contrary, it is well known that brain function is characterized by its plasticity and connectivity. Some examples are given by brain tumor patients who were able to restore capabilities associated with a specific area removed by surgery [3]. Nevertheless, these aspects are not fully considered during the development of applications operated by brain activity, such as motor imagery-based brain-computer interfaces (BCIs), which are systems designed to identify the activity changes within one region over the sensorimotor cortex linked to one body part during the corresponding motor imagery (MI), without considering the potential connectivity among other regions. In the present study, the correlation between electrodes during four different motor imageries (MIs), i.e. right hand, left hand, both hands, and the rest condition, was computed in terms of R2 over the power spectrum of the EEG signals of six healthy subjects, within different frequency ranges, using 26 electrodes covering the primary sensorimotor cortex [4]. High R2 values indicate that there is a significant difference between the power spectra computed over those regions, whereas small values show a correlation that suggests connectivity among the corresponding electrodes. Results show similar patterns for simple hand MIs and the combined MI on the contralateral side, as well as between the rest condition and simple hand MIs on the ipsilateral side (Fig. 1). This understanding allows considering additional information related to brain processing that can be used for feature extraction in order to conceive more robust commands for MI-based BCI control.

Figure 1: Correlation in terms of R2 between C3 and the rest of the electrodes (left side), and between C4 and the other electrodes (right side), computed over the power spectrum for one subject within the frequency range 10-14 Hz. References 1 G. Pfurtscheller and F. H. Lopes da Silva, “Event-related EEG/MEG synchronization and desynchronization: basic principles,” Clin Neurophysiol, vol. 110, no. 11, pp. 1842–57, Nov 1999. 2 S. Salenius, A. Schnitzler, R. Salmelin, V. Jousmäki, and R. Hari, “Modulation of human cortical rolandic rhythms during natural sensorimotor tasks,” NeuroImage, vol. 5, no. 3, pp. 221–228, 1997. 3 Sarubbo S, De Benedictis A, Merler S, Mandonnet E, Balbi S, Granieri E, Duffau H, “Towards a functional atlas of human white matter,” Hum Brain Mapp. 2015 May 9. 4 C. Lindig-León, L. Bougrain. Comparison of sensorimotor rhythms in EEG signals during simple and combined motor imageries over the contra and ipsilateral hemispheres. 37th annual international conference of the IEEE EMBS 2015, Milan, Italy. (Submitted).
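The electrode-correlation measure described in [W 90] can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' pipeline: the sampling rate, the 10-14 Hz band, and the use of squared Pearson correlation over per-trial band power are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                  # sampling rate in Hz (assumed)
n_trials, n_samples = 40, fs * 2

# Synthetic single-trial EEG for two electrodes sharing a common 12 Hz
# component whose strength varies from trial to trial.
t = np.arange(n_samples) / fs
amp = rng.uniform(0.5, 1.5, (n_trials, 1))        # shared per-trial amplitude
common = np.sin(2 * np.pi * 12 * t)
e1 = amp * common + rng.normal(0, 1, (n_trials, n_samples))
e2 = amp * common + rng.normal(0, 1, (n_trials, n_samples))

def band_power(x, fs, lo, hi):
    """Mean power in the [lo, hi] Hz band, one value per trial."""
    freqs = np.fft.rfftfreq(x.shape[1], 1 / fs)
    psd = np.abs(np.fft.rfft(x, axis=1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[:, band].mean(axis=1)

# Squared correlation across trials between the two electrodes' band powers
p1 = band_power(e1, fs, 10, 14)
p2 = band_power(e2, fs, 10, 14)
r2 = np.corrcoef(p1, p2)[0, 1] ** 2
print(f"R^2 = {r2:.2f}")
```

With a shared oscillatory component, the band powers co-vary across trials, so R2 is close to 1; for independent electrodes it would be near 0.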

[W 91]

Transfer of control during movement automation

Charlotte Le Mouel1 , Romain Brette1 1. Institut de la Vision, Paris, France doi: 10.12751/nncn.bc2015.0212

Theoretical models of movement and motor control usually rely on an optimal control framework, which requires a central executive to have accurate knowledge of the dynamics and state of the body and environment. However, behavioral experiments and lesion studies reveal that this knowledge is distributed across the body and nervous system, with local control of the movement’s biomechanics at the level of the spinal cord, and a global control of the movement’s accuracy by supraspinal centres. Moreover, during skill learning and movement automation, some aspects of control are transferred from a global to a more local level. What neural plasticity mechanisms and computational principles underlie such a transfer?


[W 92]

Real-time Cerebellar Control of a Compliant Robotic Arm

Christoph Richter1, Sören Jentzsch2, Florian Röhrbein3, Patrick van der Smagt2,3, Jörg Conradt1 1. Electrical and Computer Engineering, Technische Universität München, Karlstr. 45, 80333 München, Germany 2. fortiss GmbH, Guerickestraße 25, 80805 München, Germany 3. Department of Informatics, Technische Universität München, Boltzmannstraße 3, 85748 Garching bei München, Germany doi: 10.12751/nncn.bc2015.0213

Flexible and compliant real-time control of artificial limbs is a challenging endeavor for conventional control algorithms. Conversely, the neural control of biological limbs is usually highly stable despite their apparent complex nonlinearities and flexibilities. Part of this stability stems from fast cerebellar learning of motor actions embedded in a complex sensory feedback system. In an attempt to exploit this flexibility in a real-time robotic setting, we apply cerebellar control mechanisms [1] to operate a compliant, anthropomimetic robotic arm [2] built using the Myorobotics framework. The spiking neural network we use [3] is simulated in real time on a SpiNNaker computer [4]. A custom interface [5] translates between SpiNNaker’s digital synaptic spikes and the robot’s sensors and actuators on a CAN bus. We demonstrate initial results of the implementation with a one-degree-of-freedom robotic joint as a proof of principle. Here, we benefit from the modular and extensible design of both the Myorobotics framework and the SpiNNaker platform. Acknowledgements We thank J. Garrido for help in porting the network, and the SpiNNaker and Myorobotics teams for their hardware and support. Work was partly funded by the Human Brain Project (Grant no. 604102) in EC FP7. References 1 Marr, D. (1969), "A theory of cerebellar cortex", The Journal of Physiology, Vol. 202, No. 2, 1969 10.1113/jphysiol.1969.sp008820 2 Marques, H.G., Maufroy, C., Lenz, A., Dalamagkidis, K., Culha, U., Siee, M., and Bremner, P. (2013) "MYOROBOTICS: A modular toolkit for legged locomotion research using musculoskeletal designs", AMAM 6, 2013 3 Garrido J.A., Luque N.R., D’Angelo E. and Ros E. (2013) "Distributed cerebellar plasticity implements adaptable gain control in a manipulation task: a closed-loop robotic simulation". Front. Neural Circuits 7:159 10.3389/fncir.2013.00159 4 Furber, S.B., Galluppi, F., Temple, S., Plana, L.A. (2014) "The SpiNNaker Project", Proceedings of the IEEE, Vol. 102, No. 5, May 2014 10.1109/JPROC.2014.2304638 5 Denk, C., Llobet-Blandino, F., Galluppi, F., Plana, L. A., Furber, S., Conradt, J. (2013) "Real-Time Interface Board for Closed-Loop Robotic Tasks on the SpiNNaker Neural Computing System". ICANN 2013, LNCS 8131, pp. 467–474, 2013. 10.1007/978-3-642-40728-4_59

[W 93] Object location and size influence parietal and premotor reference frames during object-oriented reach planning Bahareh Taghizadeh1,2, Alexander Gail1,2,3 1. Sensorimotor Group, German Primate Center, Göttingen, Germany 2. Georg-August University, Göttingen, Germany 3. Bernstein Center for Computational Neuroscience, Göttingen, Germany doi: 10.12751/nncn.bc2015.0214

During visually guided reach planning, neurons in the monkey parietal reach region (PRR) and dorsal premotor cortex (PMd) encode task-relevant spatial parameters of the upcoming reach movement. When reaches are directed towards objects, spatial parameters for motor planning are partly constrained by features of the object, such as size and location. Our recent data suggest that, in early stages of planning reaches towards different positions on an object, the spatial selectivity of a subset of neurons is consistent with an object-centered reference frame: neural responses depended on the position on the object, but not on the location of the object [1]. Here we test the combined effect of object location and object size on the reference frame of the neurons, to see if not only the origin but also the scale of the reference frame is object-based. A rhesus monkey was trained to conduct an object-based reach task. The monkey had to memorize a briefly flashed peripheral visual cue which could occur at one of five positions relative to a memorized extended visual object. The size and location, but not the shape, of the visual object changed across trials. After a first delay period (visual memory), the object, but not the cue, re-occurred and the monkey had to reach to the previously cued target position on the object. Our preliminary results show a subset of neurons in PRR and PMd with object-relative spatial selectivity profiles that are independent of the location and size of the object. Our data suggest that the center and the scale of the reference frame in both areas are partly defined by the object. Single neurons in both areas encode a spectrum from size-scaled object-centered to non-scaled egocentric reference frames. This demonstrates flexible encoding in these areas, suitable for transforming between ego- and allocentric reference frames when behavior demands interaction with objects. References 1 Taghizadeh B., Gail A., Dynamic and scalable object-based spatial selectivity in monkey parietal reach region and dorsal premotor cortex. Program Number 437.14. Neuroscience 2014 Abstracts. Washington DC: Society for Neuroscience, 2014 Online.

[W 94] Amplitude and latency of EEG Beta activity during real movements, discrete and continuous motor imageries Sébastien Rimbert1,2,3, Laurent Bougrain1,2,3, Cecilia Lindig-León1,2,3, Guillaume Serrière1,2,3, Francesco Giovannini1,2,3, Axel Hutt1,2,3 1. Artificial Intelligence and Complex Systems, Université de Lorraine/LORIA, Vandoeuvre-lès-Nancy, F-54506, France 2. Neurosys Team, Inria, Villers-lès-Nancy, F-54500, France 3. Neurosys Team, CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506, France doi: 10.12751/nncn.bc2015.0215

Motor imagery (MI) modifies the neural activity within the primary sensorimotor areas of the cortex in a similar way to a real movement [1]. More precisely, beta oscillations (18-25 Hz), which are often considered a sensorimotor rhythm, show that the amplitude of brain oscillations is modulated before, during and after an MI. Before an MI, compared to a resting state, there is a gradual decrease of power in the beta band of the electroencephalographic signal, called event-related desynchronization (ERD). Moreover, from 300 to 500 milliseconds after the end of the MI, there is an increase of power called event-related synchronization (ERS), or post-movement beta rebound, with a duration of approximately one second [2]. A large number of brain-computer interfaces (BCIs) are based on the detection of MI in the electroencephalographic signal [3]. In most MI-based BCI experimental paradigms, subjects realize continuous MI, i.e. a prolonged intention of movement, during a time window of a few seconds, with the objective of improving the detection of ERD and ERS. However, when subjects imagine a succession of movements, several ERDs and ERSs are generated, with lower amplitudes than those elicited by a single MI [4]. Thus, a simple short MI may be more useful for detecting the ERD and the ERS. We devised an experiment involving eleven healthy subjects who carried out real movements, discrete MIs, and continuous MIs, in the form of an isometric flexion of the right index finger. The results suggest that both discrete and continuous MIs modulate the ERD and ERS components. The ERS is very similar in both cases, but the ERD generated by a discrete MI is more easily detectable, due to its higher power (35 ERD/ERS%) and lower variability (σ = 25 ERD/ERS%). On the other hand, continuous MIs generate a later ERS, as well as a more variable (σ = 50 ERD/ERS%) and less detectable ERD (15 ERD/ERS%). These findings suggest directions for an improved experimental paradigm.

Grand average (n = 11) ERD/ERS% curves estimated for the real movement (top), the discrete motor imagery (middle) and the continuous motor imagery (bottom) within the beta band for electrode C3. Acknowledgements This research has been supported by the ERC grant Mathana. References 1 C. Neuper and G. Pfurtscheller, Handbook of Electroencephalography and Clinical Neurophysiology: Event-Related Desynchronization. Elsevier, 1999, ch. Motor imagery and ERD, pp. 303-325. 2 G. Pfurtscheller and F. H. Lopes da Silva, "Event-related EEG/MEG synchronization and desynchronization: basic principles", Clin Neurophysiol, vol. 110, no. 11, pp. 1842-57, Nov 1999. 3 J. Wolpaw and E. W. Wolpaw, Eds., Brain-Computer Interfaces: Principles and Practice. Oxford University Press, 2012. 4 B. E. Kilavik, M. Zaepffel, A. Brovelli, W. A. MacKay, and A. Riehle, "The ups and downs of beta oscillations in sensorimotor cortex." Exp Neurol, vol. 245, pp. 15-26, Jul 2013.

[W 95] Optogenetic control of hippocampal theta oscillations reveals their function in locomotion Maria Gorbati1 , Franziska Bender1 , Marta Carus Cadavieco1 , Natalia Denisova1 , Xiaojie Gao1 , Constance Holman1 , Tatiana Korotkova1 , Alexey Ponomarenko1 1. Behavioural Neurodynamics, Leibniz Institute for Molecular Pharmacology (FMP)/ NeuroCure Cluster of Excellence, Charité Campus Mitte Charitéplatz 1; Intern: Virchowweg 6, 10117 Berlin, Germany doi: 10.12751/nncn.bc2015.0216

Hippocampal theta oscillations support encoding of an animal’s position during spatial navigation, yet longstanding questions about their impact on locomotion remain unanswered. Combining optogenetic control of hippocampal theta oscillations with electrophysiological recordings in mice, we found that hippocampal theta oscillations causally affect locomotion. We identified that their regularity underlies more stable and slower running speed during exploration. More regular theta oscillations were accompanied by more regular theta-rhythmic output of pyramidal cells. Theta oscillations were coordinated between the hippocampus and its main subcortical output, the lateral septum (LS). Inhibition of this pathway, using chemogenetics (DREADDs) or optogenetics (halorhodopsin, eNpHR3.0), revealed its necessity for the hippocampal control of running speed. Theta-rhythmic optogenetic stimulation of ChETA-expressing LS projections to the lateral hypothalamus replicated the reduction of running speed induced by more regular hippocampal theta oscillations. These results suggest that changes in hippocampal theta synchronization are translated via the LS into rapid adjustments of locomotion. The present study shows that movement-dependent bottom-up modulation from subcortical regions to the hippocampus is complemented by top-down feedback, signaled by the hippocampus to locomotor circuits. Our findings further suggest that hippocampal theta-rhythmic signaling is read out in parallel by cortical and subcortical regions, rapidly regulating exploratory activity according to representations of the environment. Acknowledgements This work was supported by the Deutsche Forschungsgemeinschaft (DFG; Exc 257 NeuroCure, TK and AP; SPP1665, AP) and the Human Frontier Science Program (RGY0076/2012, TK).

[W 96] Roles for pacemaker properties and synaptic depression in robustness of a Central Pattern Generator. Mark Olenik1 , Conor Houghton2 , Stephen Soffe1 , Alan Roberts1 1. School of Biological Sciences, University of Bristol, Life Sciences Building, 24 Tyndall Avenue, Bristol BS8 1TQ, UK 2. Department of Computer Science, University of Bristol, Merchant Venturers Building, Woodland Road, Clifton BS8 1UB, UK doi: 10.12751/nncn.bc2015.0217

Oscillatory bursting activity driven by central pattern generators (CPGs) plays an important role in a variety of rhythmic behaviours, such as locomotion, chewing, or respiration. Brown’s (1911) hypothesis for locomotor rhythm generation suggested that synaptic depression of mutual inhibition provides a mechanism for burst termination, allowing bursts of spikes to alternate rhythmically between antagonistic halves of a CPG (Purvis et al. 2007). This hypothesis has been investigated in networks generating rhythmic struggling in Xenopus tadpoles. That study indicated that synaptic depression is adequate to generate oscillatory bursts, but that the resulting rhythm lacks robustness to changes in model parameters, such as synaptic conductance. An alternative mechanism for producing rhythmic bursts in CPGs is neurons with pacemaker properties. Complex dynamics between voltage-gated and synaptic ion channels allow such pacemaker neurons to generate bursts of spiking activity intrinsically, and such bursty neurons may increase the robustness of rhythm generation in CPGs. The relative contributions of intrinsic pacemaker properties and synaptic depression to the robustness of rhythmogenesis in CPGs have not previously been addressed. In this study, we examine this question by considering a simple half-centre CPG of two neurons mutually coupled through inhibitory synapses. We compare the robustness of a model using pacemaker neurons with one based on synaptic depression of mutual inhibition. Numerical continuations of both models with respect to changes in model parameters indicate that the inclusion of pacemaker neurons can dramatically increase the robustness of the network compared with one without pacemakers but with synaptic depression. The largest gain in robustness occurs when both mechanisms of burst generation coexist in the model. References 1 Brown 1911 10.1098/rspb.1911.0077 2 Purvis et al. 2007 10.1152/jn.00908.2006
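The depression-based half-centre mechanism discussed above can be illustrated with a minimal firing-rate sketch: two mutually inhibitory units whose synapses deplete with use, so the active unit's inhibition weakens until the silent unit escapes and the roles swap. All parameters are illustrative; the actual study uses conductance-based neuron models and numerical continuation, not this rate reduction.

```python
import numpy as np

def f(x):                       # threshold-linear activation
    return np.maximum(x, 0.0)

dt, T = 0.001, 6.0
n = int(T / dt)
I, g = 1.0, 6.0                 # tonic drive and inhibitory coupling strength
tau_r, tau_rec, U = 0.05, 1.0, 10.0   # rate, recovery, and depletion constants
r = np.array([1.0, 0.0])        # firing rates, asymmetric start breaks symmetry
d = np.array([1.0, 1.0])        # synaptic resources (1 = fully recovered)
r1_trace, r2_trace = np.empty(n), np.empty(n)

for k in range(n):
    syn = g * d * r                            # depressing inhibitory drive
    r += dt / tau_r * (-r + f(I - syn[::-1]))  # each unit inhibited by the other
    d += dt * ((1.0 - d) / tau_rec - U * d * r)
    r1_trace[k], r2_trace[k] = r

# Count anti-phase switches as sign changes of the rate difference
diff = np.sign(r1_trace - r2_trace)
switches = int(np.sum(diff[1:] != diff[:-1]))
print("switch count:", switches)
```

With this parameter set the depletion term is strong enough that the active unit's effective inhibition eventually falls below the drive, producing sustained alternation; narrowing `U` or `g` makes the rhythm collapse into winner-take-all, which is the fragility the abstract refers to.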

[W 97] A novel inhibitory nucleo-cortical circuit controls cerebellar Golgi cell activity

Lea Ankri1, Zoé Husson2,3,4, Katarzyna Pietrajtis2,3,4, Rémi Proville2,4,5, Clément Léna2,4,5, Yosef Yarom1, Stéphane Dieudonné2,3,4, Marylka Yoe Uusisaari1 1. Department of Neurobiology, Hebrew University of Jerusalem, Edmond and Lily Safra Center for Brain Sciences (ELSC), Israel 2. CNRS UMR8197, Ecole Normale Supérieure, 46 rue d’Ulm, 75005 Paris, France 3. Inhibitory Transmission Team, IBENS, Ecole Normale Supérieure, 46 rue d’Ulm, 75005 Paris, France 4. INSERM U1024, Ecole Normale Supérieure, 46 rue d’Ulm, 75005 Paris, France 5. Cerebellum Team, IBENS, Ecole Normale Supérieure, 46 rue d’Ulm, 75005 Paris, France doi: 10.12751/nncn.bc2015.0218

The cerebellum, a crucial center for motor coordination and the timing of movements, is composed of a cortex and several nuclei. The main mode of interaction between these two parts is considered to be the inhibitory control of the nuclei by cortical Purkinje neurons (De Zeeuw and Berrebi, 1995; Person and Raman, 2012; Najac and Raman, 2015). Despite the anatomical evidence for feedback nucleo-cortical pathways connecting the cerebellar nuclei to the cerebellar cortex (Tolbert et al., 1976; Chan-Palay, 1977; Batini et al., 1989; Houck and Person, 2015), this projection has been mostly ignored in physiological studies. Thus, a better understanding of information processing in the cerebellar nuclei, as well as its influence on cerebellar computation, is needed. In this work we show that inhibitory GABA-glycinergic neurons of the cerebellar nuclei project profusely into the cerebellar cortex, where they make synaptic contacts on a subpopulation of cerebellar Golgi cells. Immunohistochemical examination of this subpopulation reveals that the contacted cells are GABAergic, and electrophysiological recordings show that they fire spontaneously in a rhythmic manner. These Golgi cells are inhibited by optogenetic activation of the inhibitory nucleo-cortical fibers both in vitro and in vivo. Our data suggest that the cerebellar nuclei contribute to the functional recruitment of the cerebellar cortex by decreasing Golgi cell inhibition onto granule cells. This disinhibitory effect on the granule cell layer creates a time window for transmission of inputs from the mossy fibers to the Purkinje cell layer (D’Angelo and De Zeeuw, 2009) and therefore should be incorporated into existing cerebellar models.

Nucleo-cortical inhibition onto cerebellar Golgi cells Acknowledgements Research was supported by CNRS, INSERM and ENS, by Agence Nationale de la Recherche Grant INNET (BL2011) and the Edmond and Lily Safra Center for Brain Sciences (ELSC). References 1 De Zeeuw C, Berrebi AS. Postsynaptic targets of Purkinje cell terminals in the cerebellar and vestibular nuclei of the rat. European Journal of Neuroscience 7: 2322–33, 1995 doi: 10.1111/j.1460-9568.1995.tb00653.x 2 Person AL, Raman IM. Purkinje neuron synchrony elicits time-locked spiking in the cerebellar nuclei. Nature 481: 502–5, 2012 doi: 10.1038/nature10732 3 Najac M, Raman IM. Integration of Purkinje Cell Inhibition by Cerebellar Nucleo-Olivary Neurons. J. Neurosci. 35: 544–9, 2015 doi: 10.1523/JNEUROSCI.3583-14.2015 4 Tolbert DL, Bantli H, Bloedel JR. Anatomical and physiological evidence for a cerebellar nucleo-cortical projection in the cat. Neuroscience 1: 205–17, 1976 doi: 10.1016/0306-4522(76)90078-6 5 Chan-Palay V. The cerebellar dentate nucleus, Springer-Verlag, Berlin, 1977 doi: 10.1007/978-3-642-66498-4_1 6 Batini C, Buisseret-Delmas C, Compoint C, Daniel H. The GABAergic neurones of the cerebellar nuclei in the rat: projections to the cerebellar cortex. Neurosci. Lett. 99: 251-256, 1989 doi: 10.1016/0304-3940(89)90455-2 7 Houck BD, Person AL. Cerebellar premotor output neurons collateralize to innervate the cerebellar cortex. J Comp Neurol, accepted article, 2015 doi: 10.1002/cne.23787 8 D’Angelo E, De Zeeuw C. Timing and plasticity in the cerebellum: focus on the granular layer. Trends in Neurosciences, 32: 30-40, 2009 doi: 10.1016/j.tins.2008.09.007

Other

[W 98] Metaplasticity and spontaneous activity contribute to the heterosynaptic plasticity model of the granule cell in vivo Azam Shirrafi Ardekani1, Peter Jedlicka2, Lubica Benuskova1,3, Wicliffe C. Abraham3,4 1. Department of Computer Science, University of Otago, Dunedin, New Zealand 2. Neuroscience Center, J. W. Goethe University, Frankfurt, Germany 3. Brain Health Research Center, University of Otago, Dunedin, New Zealand 4. Department of Psychology, University of Otago, Dunedin, New Zealand doi: 10.12751/nncn.bc2015.0219

Long-term potentiation (LTP) and long-term depression (LTD) of synaptic efficacy are two forms of long-lasting synaptic plasticity that underlie learning and memory in the brain. Granule cells of the hippocampal dentate gyrus, with their two main excitatory inputs, the medial (MPP) and lateral perforant paths (LPP), manifest both LTP and LTD. Several experimental studies performed in vivo show that high-frequency stimulation (HFS) induces LTP at the MPP and concurrent LTD at the neighbouring untetanized LPP. The BCM theory postulates that previous average postsynaptic activity affects the sign and magnitude of all current synaptic weight changes over the whole postsynaptic neuron; this phenomenon is also called metaplasticity. As well as the frequency of presynaptic spiking, there is evidence that the precise timing of pre- and postsynaptic spiking determines the LTP and LTD magnitudes. This timing property is called spike-timing dependent plasticity (STDP). We integrate these phenomena into one unified model. For the granule cell model, we use the compartmental reduced-morphology model described in Santhakumar et al. 2005. For the synaptic plasticity model, we use equations from Benuskova and Abraham 2007. These equations are modified in such a way that only the amplitude of LTD depends on the average postsynaptic voltage over some recent time (Fig. 1). However, we show that the data can be reproduced equally well when both the LTP and LTD amplitudes are metaplastically modified.

.

HFS is delivered to the MPP synapses only. Left: simulated spontaneous input activity is present throughout at all MPP and LPP synapses. Right: spontaneous input activity is disabled at the LPP while still ongoing at the MPP synapses.
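The sliding-threshold idea behind this abstract can be illustrated with a toy rate-based rule in the spirit of BCM, with the LTD amplitude metaplastically scaled by the threshold. This is a deliberately simplified sketch with illustrative parameters; it is not the authors' compartmental granule-cell model or their exact equations.

```python
import numpy as np

dt, T = 0.01, 200.0
steps = int(T / dt)
eta, tau_theta = 1e-5, 50.0
w = np.array([0.5, 0.5])        # MPP-like (0) and LPP-like (1) synaptic weights
theta = 1.0                     # sliding modification threshold, tracks <y^2>
x_spont = np.array([1.0, 1.0])  # ongoing spontaneous input on both pathways

w_init = w.copy()
for k in range(steps):
    t = k * dt
    x = x_spont.copy()
    if 20.0 <= t < 21.0:        # brief high-frequency stimulation of pathway 0
        x[0] = 20.0
    y = float(w @ x)            # postsynaptic activity
    phi = y * (y - theta)       # BCM-like plasticity factor
    if phi < 0:
        phi *= theta            # metaplastic scaling of the LTD amplitude only
    w += dt * eta * x * phi
    theta += dt * (y * y - theta) / tau_theta

print("pathway 0 (tetanized) change:  ", w[0] - w_init[0])
print("pathway 1 (untetanized) change:", w[1] - w_init[1])
```

The tetanus drives LTP at the stimulated pathway and raises the threshold; afterwards, spontaneous activity falls below the elevated threshold, so the untetanized pathway undergoes slow heterosynaptic LTD, qualitatively reproducing the concurrent MPP-LTP/LPP-LTD pattern. Note that spontaneous input at the LPP is essential here: with `x_spont[1] = 0` the LPP weight cannot change at all, which is the point the abstract's right panel makes.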

[W 99] Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks Leon A. Gatys1,2,3, Alexander S. Ecker2,3,4,5, Matthias Bethge2,3,4 1. Graduate School of Neural Information Processing, University of Tuebingen, Germany 2. Bernstein Center for Computational Neuroscience, Tuebingen, Germany 3. Institute of Theoretical Physics and Centre for Integrative Neuroscience, University of Tuebingen, Germany 4. Max Planck Institute for Biological Cybernetics, Tuebingen, Germany 5. Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA doi: 10.12751/nncn.bc2015.0220

It is a long-standing question how biological systems transform visual inputs to robustly infer high-level visual information. Research in recent decades has established that much of the underlying computation takes place in a hierarchical fashion along the ventral visual pathway. However, the exact processing stages along this hierarchy are difficult to characterise. Here we present a method to generate stimuli that will allow a principled description of the processing stages along the ventral stream. We introduce a new parametric texture model based on the powerful feature spaces of convolutional neural networks optimised for object recognition. We show that constraining spatial summary statistics on feature maps suffices to synthesise high-quality natural textures. Moreover, we establish that our texture representations continuously disentangle high-level visual information and demonstrate that the hierarchical parameterisation of the texture model naturally enables us to generate novel types of stimuli for systematically probing mid-level vision. Acknowledgements This work was funded by the German National Academic Foundation (LG), the Bernstein Center for Computational Neuroscience (FKZ 01GQ1002) and the German Excellence Initiative (EXC307) (LG, AE, MB).
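The "spatial summary statistics on feature maps" that this model constrains are Gram matrices of channel activations. A minimal sketch with a random stand-in for a convolutional feature map (the network itself is omitted; only the statistic is shown):

```python
import numpy as np

def gram_matrix(fmap):
    """Channel-by-channel correlations of a feature map, averaged over
    spatial positions -- the texture summary statistic used by Gatys et al."""
    c, h, w = fmap.shape
    f = fmap.reshape(c, h * w)
    return f @ f.T / (h * w)

rng = np.random.default_rng(0)
fmap = rng.normal(size=(64, 16, 16))   # stand-in for one conv-layer activation
g = gram_matrix(fmap)

# The Gram matrix discards spatial arrangement: shuffling all spatial
# positions (identically across channels) leaves it unchanged, which is
# why it describes a texture rather than a specific image.
shuffled = fmap.reshape(64, -1)[:, rng.permutation(16 * 16)].reshape(64, 16, 16)
print(np.allclose(g, gram_matrix(shuffled)))
```

Texture synthesis then optimises a white-noise image so that its feature-map Gram matrices, layer by layer, match those of the target texture.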


[W 100]

Motion Direction Selectivity in the O. Degu Retina

Mónica Otero, Carlos Sepúlveda, Adrián G. Palacios, María-José Escobar doi: 10.12751/nncn.bc2015.0221

It is widely accepted that the retina performs sophisticated computations to extract complex features of the visual scene. One of the most important visual features for survival is motion. Although motion detection has traditionally been associated with the visual cortex, experimental evidence suggests that a subset of ganglion cells in the retina is able to detect movement along specific axes of the visual field. These neurons have been termed direction-selective ganglion cells (DSGCs). Using a 256-multi-electrode-array system, we recorded the action potential activity of in-vitro retinal ganglion cells (RGCs) of a diurnal rodent in response to different drifting gratings and bars. We identified and characterized different RGCs encoding motion features, and characterized the main properties of their receptive fields (RFs) using checkerboard stimuli, estimated by spike-triggered average (STA) and spike-triggered covariance (STC). The contribution of different DSGC populations to the motion direction selectivity mechanism was analyzed according to the discharge patterns of the neurons, taking into account changes in stimulus parameters such as bar width, orientation, and velocity. We found several neuron populations with a variety of direction selectivity indices as a function of bar width and speed. We conclude that there is substantial evidence indicating that the retina is an early contributor to motion coding in the visual system.

Samples of RGC responses to checkerboard and drifting-grating stimuli. Checkerboards were used to characterize RFs via STA and STC; drifting gratings were used to characterize motion direction selectivity in different spatio-temporal frequency bands. Acknowledgements FONDECYT 1110292, 1140403, ANR-47 CONICYT, Basal AC3E FB0008, Millennium Institute ICM P09-022-F.
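The spike-triggered average used above to estimate receptive fields can be sketched as follows, here with a one-dimensional white-noise "checkerboard" and a toy threshold cell; the filter shape and the no-lag convention are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, dim = 20000, 40                # 1-D binary noise stimulus for brevity
stim = rng.choice([-1.0, 1.0], size=(n_frames, dim))

# Toy linear-nonlinear cell: Gaussian spatial filter + hard threshold
true_filter = np.exp(-0.5 * ((np.arange(dim) - 20) / 3.0) ** 2)
drive = stim @ true_filter
spikes = (drive > np.quantile(drive, 0.9)).astype(float)  # top-10% frames spike

# Spike-triggered average: mean stimulus at spike frames (no temporal lag here)
sta = (stim * spikes[:, None]).sum(axis=0) / spikes.sum()

# The STA recovers the shape of the underlying receptive field
corr = np.corrcoef(sta, true_filter)[0, 1]
print(f"correlation STA vs. true filter: {corr:.2f}")
```

STC extends this by examining the covariance (not just the mean) of the spike-triggered stimulus ensemble, which exposes filters that the STA misses for symmetric nonlinearities.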


[W 101] Numerical Model of Vesicle Motion Based on Statistics of Exocytosis Events in Chromaffin Cells Daungruthai Jarukanont1, Prof. Dr. Martin E. Garcia1, Imelda Bonifas Arredondo2, Ricardo Femat2 1. Physics, University of Kassel, Kassel, Germany 2. Mathematics, IPICYT, San Luis Potosí, Mexico doi: 10.12751/nncn.bc2015.0222

Chromaffin cells release catecholamines by exocytosis, a process that includes docking, priming and membrane fusion. Although solid contributions have been reported, the detailed mechanisms are still unclear, in particular regarding vesicle transport to the active sites. Previous imaging studies report that vesicles move in a directed fashion toward the membrane preceding exocytosis. To study vesicle motion, we performed prolonged-stimulus amperometric experiments on chromaffin cells to capture sustained release. Because the pool of initially primed and docked vesicles is depleted during sustained activity, each amperometric spike corresponds to a single release from a new vesicle arriving at the active site. The time series formed by the signal peaks were normalized through a time-rescaling transformation to make signals from different cells comparable. We show that the statistics of all performed measurements deviate considerably from those of a Poisson process, confirming the conclusion of previous imaging studies that vesicle motion underneath the plasma membrane is not random. Moreover, the inter-spike time distribution is reasonably well described by a two-parameter gamma distribution. These findings support directed movement of vesicles toward the membrane, so the biological process can be captured by a mathematical model with physical ingredients (a Langevin process). In addition, we performed Langevin simulations to describe the release statistics and to reproduce our measurements. The agreement between simulations and experiment is good if we assume that vesicles are driven toward the membrane by an attractive harmonic potential.

[W 102]

Image recurrence across saccades is encoded in the retina

Vidhyasankar Krishnamoorthy1,2, Michael Weick1,2, Tim Gollisch1,2 1. Department of Ophthalmology, University Medical Center Göttingen, 37073 Göttingen, Germany 2. Bernstein Center for Computational Neuroscience Göttingen, 37073 Göttingen, Germany doi: 10.12751/nncn.bc2015.0223

Natural vision, such as during saccadic scene changes, provides complex spatio-temporal visual input to the retina, where the visual signals are segmented into brief image fixations separated by global motion signals. It has been shown that the activity of retinal ganglion cells (RGCs) is strongly modulated during saccade-like image shifts, either by short bursts of spikes or by suppression of spiking activity. However, it is not known how the stimulus history (i.e., the saccade and an earlier fixation) shapes the response to the current fixation. Here we address this problem by studying retinal coding under simulated saccadic vision. We recorded spiking activity of RGCs from isolated mouse retina in response to a stimulus with saccade-like image shifts and analysed the responses to both the saccade-like motion and the fixation after a saccade. Surprisingly, we observed a group of cells that selectively responded to scenarios in which the newly fixated image is similar to the image before the saccade. This sensitivity to “image recurrence” was robust to contrast, saccade duration and spatial scale of the stimulus. We observed these responses also for a variant of the stimulus with slow drift followed by a rapid reset to the same fixation, and we found such “image-recurrence-sensitive” responses to saccadic shifts of natural images. We identified the cells that show these responses as transient OFF cells with large receptive fields. Furthermore, we show that this response is mediated by a serial inhibition circuit, in which inhibition of RGCs is suppressed specifically during image recurrence. Our results demonstrate that saccade-like image transitions elicit novel and unexpected dynamics, allowing the cells to detect image recurrence across saccades.
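The deviation-from-Poisson test and two-parameter gamma fit described in [W 101] can be sketched as follows; the intervals here are synthetic stand-ins, not the amperometric data:

```python
import numpy as np
from scipy import stats

# synthetic inter-event intervals: a gamma with shape > 1 stands in for the
# rescaled amperometric inter-spike times (a Poisson process would give
# shape = 1, i.e. exponential intervals)
rng = np.random.default_rng(1)
isi = rng.gamma(shape=3.0, scale=0.5, size=2000)

# two-parameter gamma fit (location pinned to 0) vs. an exponential fit
a, loc, scale = stats.gamma.fit(isi, floc=0)
_, scale_e = stats.expon.fit(isi, floc=0)

ll_gamma = stats.gamma.logpdf(isi, a, 0, scale).sum()
ll_expon = stats.expon.logpdf(isi, 0, scale_e).sum()

# a fitted shape well above 1 and a higher likelihood both argue
# against a Poisson (exponential) renewal process
print(a > 1.0, ll_gamma > ll_expon)  # → True True
```

Since the exponential is the shape-1 special case of the gamma family, the likelihood comparison directly quantifies how far the intervals deviate from Poisson statistics.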

[W 103] Brain-Computer Interfacing in Amyotrophic Lateral Sclerosis: Implications of a Resting-State EEG Analysis Vinay Jayaram1,2 , Natalie Widmann1 , Christian Foerster3 , Tatiana Fomina1,2 , Matthias Hohmann1,2 , Jennifer Mueller vom Hagen4 , Matthis Synofzik4 , Bernhard Schoelkopf1 , Ludger Schoels4 , Moritz Grosse-Wentrup1 1. Empirical Inference, Max Planck Institute for Intelligent Systems, Spemannstrasse 38, Germany 2. IMPRS for Cognitive and Systems Neuroscience, University of Tuebingen, Wilhelmstrasse 11, Germany 3. Computer Science, University of Tuebingen, Wilhelmstrasse 11, Germany 4. Hertie Institute for Clinical Brain Research, Hoppe-Seyler-Strasse 3, Germany doi: 10.12751/nncn.bc2015.0224

Despite decades of research and exhaustive characterization from a medical perspective [1], the electrophysiological effects of Amyotrophic Lateral Sclerosis (ALS) remain poorly characterized, possibly because for most of its history ALS has been considered a purely motor disorder. Over these decades, the electroencephalographic (EEG) signal has nevertheless been relied upon as the best non-invasive channel through which paralysed patients can interact by brain output alone, and ALS patients have been a fertile ground for Brain-Computer Interface (BCI) studies, which, despite early success [2,3,4], have recently proved unable to maintain their rate of progress [5]. The aim of this study is to shed light on subtler effects of ALS on the recorded EEG and thereby to understand why recent BCI efforts have met with so little success. To this end, we recorded high-density resting-state data from six non-demented ALS patients and thirty-two healthy controls and investigated group differences. While similar studies have been attempted in the past [6], none have used high-density EEG or tried to distinguish between physiological and non-physiological sources of the EEG. Using the SOBI algorithm [7], we decomposed the high-density EEG into its independent sources and kept only those with features corresponding to cortical sources according to current research [8,9]. Averaging over these cortical sources, we find a global increase in high gamma power (50–90 Hz) that is not specific to the motor cortex, suggesting that the mechanism behind ALS affects non-motor cortical regions even in the absence of comorbid cognitive deficits.

References
1 Rowland and Schneider 2001
2 Birbaumer et al. 1999, doi: 10.1038/18581
3 Kuebler et al. 2005, doi: 10.1212/01.WNL.0000158616.43002.6D
4 Nijboer et al. 2008, doi: 10.1016/j.clinph.2008.03.034
5 Marchetti and Priftis 2014, doi: 10.1016/j.clinph.2014.09.017
6 Mai et al. 1998, doi: 10.1016/S0013-4694(97)00159-4
7 Belouchrani et al. 1997, doi: 10.1109/78.554307
8 Onton et al. 2006, doi: 10.1016/j.neubiorev.2006.06.007
9 Grosse-Wentrup M et al., “How to test the quality of reconstructed sources in independent component analysis (ICA) of EEG/MEG data,” in Pattern Recognition in Neuroimaging (PRNI), 2013 International Workshop on, pp. 102–105, IEEE, 2013.
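The band-power comparison reported above (mean log power in the 50–90 Hz band) can be sketched with a Welch spectral estimate; the signals, sampling rate and group labels below are synthetic assumptions standing in for the reconstructed cortical sources:

```python
import numpy as np
from scipy.signal import welch

fs = 250.0  # assumed sampling rate (Hz); not stated in the abstract

def high_gamma_logpower(x, fs, band=(50.0, 90.0)):
    """Mean log band-power in the high-gamma range from a Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs))
    mask = (f >= band[0]) & (f <= band[1])
    return np.log(pxx[mask]).mean()

# synthetic stand-ins for two cortical sources: the "patient" signal
# simply carries more broadband power than the "control" signal
rng = np.random.default_rng(2)
n = int(20 * fs)
control = rng.normal(0.0, 1.0, n)
patient = rng.normal(0.0, 1.3, n)

p_ctl = high_gamma_logpower(control, fs)
p_pat = high_gamma_logpower(patient, fs)
print(p_pat > p_ctl)  # → True
```

In the study itself this quantity would be computed per cortical source and averaged within each group before comparing groups.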

[W 104] Investigation of cultured neuronal networks and their dynamics induced by novel stimulation paradigms Anja Chilian, Andras Katai, Peter Husar 1. Bio-Inspired Technologies, Fraunhofer Institute for Digital Media Technologies, Ilmenau, Ehrenbergstr. 31, Germany doi: 10.12751/nncn.bc2015.0225

The behavior of cultured neuronal networks can be specifically controlled and investigated using electrical stimulation. For this purpose, we developed novel stimulation paradigms representing the main characteristics of all sensory encoders. For controlled conditioning of in vitro cultured neurons, we applied these patterns in tetanic stimulation. The stimulation protocols were designed for both 2D and 3D multi-electrode arrays (MEAs). Here we present first results of applying the novel stimulation paradigm to neurons cultured on 2D MEAs. The MEA recordings were analyzed using specifically adapted methods for spike detection and spike sorting, as well as further methods for connectivity analysis. The results show how the novel stimulation paradigm influences neural network dynamics. Acknowledgements 3DNeuroN is an international collaborative research project funded by the European Commission’s Future and Emerging Technologies (FET) scheme.
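Spike detection on MEA recordings of this kind typically starts from a threshold on a robust noise estimate. A minimal sketch (not the authors' adapted method) on a simulated trace:

```python
import numpy as np

def detect_spikes(trace, fs, thresh_sd=5.0, refractory_ms=1.0):
    """Detect negative threshold crossings on an extracellular trace.

    The noise level is estimated from the median absolute deviation,
    a common robust choice for MEA recordings.
    """
    noise = np.median(np.abs(trace)) / 0.6745
    thresh = -thresh_sd * noise
    refr = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -refr
    for i, v in enumerate(trace):
        if v < thresh and i - last >= refr:
            spikes.append(i)
            last = i
    return np.array(spikes)

# toy trace: 1 s of unit-variance noise with three clear negative spikes
rng = np.random.default_rng(3)
fs = 25000.0
trace = rng.normal(0.0, 1.0, int(fs))
true_times = [5000, 12000, 20000]
for t in true_times:
    trace[t] -= 10.0
detected = detect_spikes(trace, fs)
```

Spike sorting would then cluster waveform snippets cut around these indices; the detection step above only localizes candidate events.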


Figure (for abstract [W 103]): Topological plots of the difference in mean log-bandpower between controls and ALS patients (red: higher power in controls) for each frequency band.


[W 105] Signal acquisition and measurement system for in vitro neural networks

Thomas Just, Peter Husar 1. Biosignal Processing, Ilmenau University of Technology, Ilmenau, Ehrenbergstr. 29, Germany doi: 10.12751/nncn.bc2015.0226

For a 3D-MEA measurement system we developed a signal acquisition and conditioning ASIC. The chip filters and amplifies 80 channels at once, with a sampling rate between 15 ksps and 30 ksps in every channel simultaneously. The 80 input signals are pre-amplified and filtered with low-noise amplifiers. In the next stage the signals are sampled in 160 sample-and-hold units, which realize a snapshot function (reading from all 80 sensors at one instant and holding the 80 samples from the last timestamp). Several internal multiplexers route the signals to 4 output channels. Ten chips can be synchronized to read out 800 channels in total from a 3D multi-sensor array. References 1 Thomas Just, Thomas Kautz, Martin Weis, Adam Williamson, Peter Husar: Neuronal cell spike sorting using signal features extracted by PARAFAC. 6th Annual International IEEE EMBS Conference on Neural Engineering, San Diego, 11/2013. 2 Thomas Just, Martin Weis, Peter Husar: Spike Detection and Sorting Using PARAFAC2 Method. 36th Annual International IEEE EMBS Conference, Chicago, 08/2014.
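The readout arithmetic implied by these figures can be checked directly, assuming the 80 channels are split evenly over the 4 outputs (an assumption; the multiplexing scheme is not detailed in the abstract):

```python
# nominal figures from the abstract
channels_per_chip = 80
fs = 30_000          # upper sampling rate per channel (samples/s)
outputs = 4
chips = 10

per_output_rate = (channels_per_chip // outputs) * fs  # 20 channels serialized per output
total_rate = chips * channels_per_chip * fs            # full 800-channel array

print(per_output_rate, total_rate)  # → 600000 24000000
```

So each output pin would carry 600 ksps at the maximum per-channel rate, and a fully synchronized 10-chip array produces 24 Msps in aggregate.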


Index


Authors Abarbanel H, 28 Abraham WC, 218 Adam V, 198 Aertsen A, 68, 170, 171 Aglioti SM, 209 Ai H, 151 Akhavan M, 159 Alagapan S, 92 Alipour A, 46 Alizadeh S, 96, 135 Allefeld C, 139, 140 Allen K, 82, 83 Amin H, 174 Amitai Y, 90 Angelhuber M, 170, 171 Angle MR, 89 Ankri L, 217 Antic B, 26 Apicella I, 51 Aschauer D, 169 Ashida G, 194 Asthana MK, 31 Auth JM, 114 Axenie C, 186 Büchel C, 19 Büchler U, 26 Bányai M, 44 Baden T, 131 Bagheri N, 159 Bai S, 158 Baier H, 203 Bakker R, 157 Baladron J, 161, 163 Balaguer-Ballester E, 178 Ball T, 177 Balzani E, 205 Barak O, 47 Bartels A, 39 Bassetto G, 146 Battaglia F, 21 Bauermeister C, 48 Bayat FK, 97 Bayati M, 94 Becker B, 55 Becker C, 108 Behrens C, 27, 184 Beining M, 168 Benda J, 48, 136, 203 Bender F, 215 Benuskova L, 218 Berberich S, 192 Berdondini L, 174 Berens P, 27, 131, 184 Bergmann K, 195 Bernardi D, 71 Bethge M, 27, 130, 131, 185, 219 Bettler B, 52 Beuth F, 61 Bibichkov D, 193


Bill J, 36 Blanco-Hernández E, 76 Boboeva V, 128 Boedecker J, 55 Bohte SM, 115 Boi F, 205 Bollmann J, 210 Bollmann JH, 185 Bonifas Arredondo I, 221 Borchardt V, 152 Both M, 84 Bougrain L, 211, 214 Bratley C, 30 Braun HA, 49 Braun J, 34, 48, 207 Breakspear M, 152 Breit M, 99 Breitwieser O, 36 Brette R, 212 Brewer GJ, 92 Brito CSN, 97 Brochier T, 60 Broguiere N, 183 Brown EN, 20 Brunel N, 53 Buhry L, 154 Bullmore E, 18 Bushong E, 99 Bytschok I, 36 Canova* C, 137 CAO R, 34 Capone C, 71 Carus Cadavieco M, 215 Carus-Cadavieco M, 66 CESSAC B, 50 Chambers C, 198 Cheng G, 116 Cheng S, 94 Chilian A, 223 Chizhov A, 35, 144 Chorley P, 134 Christophel TB, 122 Ciftci K, 35 Clapp SW, 200 COFRE R, 50 Conradt J, 155, 156, 186–188, 213 Cosentini I, 205 Csordas D, 86 Cunliffe VT, 195 Cuntz H, 156, 168 Dahmen D, 62 Davey N, 78 Davison A, 134 de Candia A, 51 de Kock CP, 196 Deger M, 119, 132 Del Giudice P, 71 DeMarse TB, 92

Ebrahimpour R, 159 Ecker A, 146 Ecker AS, 185, 219 Egert U, 55, 56, 67 Egger R, 196 Ehrlich S, 116 Ekramnia M, 124 Elephant Community, 134 Ellisman M, 99 Encke J, 165, 167 Endres DM, 180 Engelken R, 79 Eppler B, 169 Escobar M, 220 Esir P, 57, 75 Euler T, 131, 184 Everding L, 188 Förster C, 120 Falappa M, 205 Farkhooi F, 58 Fauth M, 110 Fedotenkova M, 149 Femat R, 221 Ferreira CC, 199 Firouzi M, 186, 187 Fitzpatrick D, 145 Foerster C, 222 Fomina T, 120, 222 Forró C, 78 Frank LM, 21 Freund TF, 69 Froudarakis E, 131 Fucke T, 192 Gülçür HÖ, 97

Güveniş A, 97 Gail A, 118, 213 Gais S, 96, 135 Gao X, 215 Garagnani M, 175 Garcia PDME, 221 Gardner B, 37, 85 Gass P, 95 Gatys LA, 219 Gerstein G, 137 Gerstner W, 59, 97, 119 Ghorbani S, 160 Giese MA, 180 Giovannini F, 154, 214 Gjorgjieva J, 91 Glasauer S, 41, 187 Goenner L, 122 Gollisch T, 188–191, 221 Gorbati M, 66, 215 Grün S, 60, 137, 138 Grüning A, 37, 85 Graben Pb, 149 Graboski JJ, 76 Graham L, 144 Grewe J, 136, 203 Grosse-Wentrup M, 120, 222 Gruen S, 134 Gu Y, 181 Guo T, 158 Gupta BK, 127 Gutschalk A, 121


Demenescu R, 152 Demkó L, 78 Denfield GH, 185 Denisova N, 215 Denk W, 210 Denker M, 60, 137 Denker* M, 134 Di Marco S, 174 Diamond ME, 117 Dias C, 205 Dicke PW, 180 Diesmann M, 36, 60, 62, 157, 164 Diester I, 52 Dieudonné S, 217 Dinkelbach HÜ, 38 Dokos S, 158 Dolan RJ, 18 Donoso JR, 64 Donovan JC, 203 Dragoi V, 147 Draguhn A, 84 Druzin M, 144 Dunn BA, 141 Dupret D, 30 Dupuy N, 98 Durstewitz D, 53, 54, 84, 95, 101, 133, 195 Dykstra AR, 121

Häusler S, 39, 40 Haas O, 204 Haavik J, 160 Haenicke J, 173 Hagens O, 59 Hahn T, 192 Halfmann M, 102 Hamker F, 122 Hamker FH, 38, 61, 161–163 Hanuschkin A, 52 Hartmann C, 113 Hass J, 192 Haynes J, 139, 140 Haynes JD, 122 Heidari-Gorji H, 159 Hein B, 145 Heinze HJ, 152 Helias M, 62, 137, 164 Hemmert W, 158, 165–167, 192 Henke J, 204 Hennig MM, 168 Herman P, 98 Herpich J, 111 Herrmann MJ, 31 Hertäg L, 53 Hertz J, 141 Herz A, 99 Herz AV, 39, 40, 86 Higgins DC, 109 Hilgen G, 168 Hohmann M, 222 Hohmann MR, 120


Holca-Lamarre R, 197 Holman C, 215 Holstein D, 134 Holzbecher A, 100 Hondrich T, 84 Hota R, 108 Hota RN, 148 Houghton C, 216 Huebner D, 66 Husar P, 223, 224 Husson Z, 217 Hutt A, 149, 154, 214 Huys QJ, 123 Hyttinen JA, 142 Hyttinen JAK, 63, 70, 143 Ikeno H, 151 Isayed A, 150 Ito J, 134, 138 Jamalabadi H, 96, 135 Jarukanont D, 221 Jarvis S, 56 Jayaram V, 120, 222 Jedlicka P, 168, 218 Jennings T, 134 Jentzsch S, 213 Johansson S, 144 Jordan J, 36 Jovanovic S, 141 Junker M, 180 Just T, 224 Kühn NK, 188 Kümmerer M, 130 Káli S, 69 Kadakia N, 28 Kai K, 151 Kamyshanska H, 193 Kappen HJ, 72 Kapucu FE, 142, 143 Kaschube M, 145, 169, 193 Katai A, 223 Kato A, 104 Kaushalya SK, 195 Kawaguchi Y, 172 Kbah SN, 179 Kellner CJ, 136 Kelly R, 107 Kempter R, 64, 65, 100 Keren H, 48 Khani MH, 189 Kilavik B, 60 Kilias A, 67 Kim C, 67 Kirsch P, 101 Kirst C, 39 Kiskin N, 89 Kleinsteuber M, 125 Kleppe R, 160 Koepcke L, 145 Kohler L, 83 Kollo M, 89 Kondo M, 172


Koppe G, 95, 101 Koren V, 147 Kornienko O, 133 Korotkova T, 66, 215 Krausse AL, 152 Kretzberg J, 145, 194 Krishnamoorthy V, 221 Kuebler ES, 91 Kulkarni R, 128 Kumar A, 66–68, 170, 171 Kumar SS, 55 Kumaraswamy A, 151 Kuner R, 195 Lücke J, 197 Léna C, 217 Lancier S, 103 Landgraf T, 105 Latuske P, 82 Le Mouel C, 212 Leibold C, 29, 135, 204 Lenk K, 63, 70 Li M, 152 Li S, 152 Liedtke J, 182 Lindig-León C, 211 Lindig-león C, 214 Lindner B, 71 Ling D, 78 Lis S, 101 Liu JK, 189 Lochmann T, 147 Loebel A, 41 Lonardoni D, 174 Lord A, 152 Lovell NH, 158 Lucon E, 45 Luhmann HJ, 90 Mühlberger A, 31 Müller vom Hagen J, 120 Macke J, 27 Macke JH, 146 Mackwood O, 109 Maier N, 64 Malinina E, 144 Mallot HA, 102, 103 Malsburg Cvd, 108 Manoonpong P, 207 Martinez A, 160 Mattia M, 34, 71 Mayer NM, 42 McNamara C, 30 Mehler J, 124 Meier K, 36 Meister M, 91 Memmesheimer R, 72 Mengiste S, 68 Mensi S, 59 Menzer F, 125 Mergenthaler K, 175 Metzner C, 208 Meyes R, 134 Michalikova M, 65

Nachstedt T, 112, 114 Nagele J, 86 Narayanan RT, 196 Naud R, 77 Nawrot MP, 73, 105, 173 Neuschwander K, 145 Nieus T, 174 Nikbakht N, 117 Nikolaev A, 195 Nonnenmacher M, 27 Nopp P, 167 Oberlaender M, 196 Obermayer K, 147, 175, 197 Özmen B, 86 Okujeni S, 55 Olenik M, 216 Ommer B, 26 Onken A, 87, 189 Orbán G, 44 Oschmann F, 175 Otero M, 220 Pérez Escobar JA, 83 Palacios AG, 220 Palazzolo G, 183 Palmigiano A, 80 Pamir E, 173 Panzeri S, 87, 189 Paraskevov A, 74, 88 Pastukhov A, 34 Patirniche D, 99 Pavone EF, 209 Pawelzik K, 106 Pearlmutter BA, 107 Pelofi C, 198 Perez-Garci E, 52 Petrovici M, 36 Phan LD, 134 Phan S, 99 Pietrajtis K, 217 Pillow J, 20 Pirmoradian S, 168 Ponomarenko A, 66, 215 Pozzorini C, 59 Pressnitzer D, 198 Preuss SJ, 185 Proville R, 217 Prozmann V, 102 Prsa M, 180 Psarrou M, 78 Pulvermüller F, 175

Quaglio P, 134 Queisser G, 99 Quian Quiroga R, 117 Quiroga-Lombard CS, 195 Räisänen E, 70 Röhrbein F, 213 Racz RR, 89 Ramesh V, 108, 148 Ramirez-Amaro K, 116 Ramm F, 66 Ramos Traslosheros Lopez LG, 190 Rautenberg PL, 151 Rebollo B, 71 Reif A, 31 Reimer J, 131 Remme M, 65 Renz DL, 123 Reyes-Puerta V, 90 Richter C, 186, 213 Richter H, 95 Riedmiller M, 55 Riehle A, 60 Rifai K, 205 Rigotti M, 23 Rimbert S, 214 Ritter K, 140 Rizza G, 209 Roberts A, 216 Rodrigues AR, 199 Roelfsema PR, 115 Rosón MR, 131 Rost T, 73 Rostami V, 134, 138 Rotter S, 56, 67, 177 Roudi Y, 141 Rowbottom A, 85 Rozenblit F, 191 Rumpel S, 169 Rupp A, 178 Russo E, 84


Mikkonen JE, 142 Miner D, 113 Molano-Mazon M, 87 Mongiat LA, 168 Monteforte M, 80 Morishima M, 172 Morita K, 104, 172 Morrison A, 132 Mueller vom Hagen J, 222 Mulas M, 155, 156 Munoz Cespedes A, 71

Sáray S, 69 Safaai H, 87 Safavieh E, 56 Sahani M, 198 Sahasranamam A, 67 Salmasi M, 41 Salminen N, 201 Sanchez-Vives MV, 71 Sandhaeger F, 146 Santoscoy PM, 195 Sauer M, 45 Scarpetta S, 51 Schäfer L, 177 Schölkopf B, 120 Schöls L, 120 Schönauer M, 96, 135 Schaefer AT, 89 Schemmel J, 36 Schiefer J, 177 Schilstra M, 78 Schleich P, 167 Schmidt M, 157


Schmidt* M, 164 Schmitz C, 192 Schmitz D, 64 Schmoldt D, 105 Schneider JJM, 154 Schneider S, 31 Schoelkopf B, 222 Schoels L, 222 Schönauer M, 96, 135 Schottdorf M, 206 Schreyer H, 189 Schrobsdorff H, 206 Schubert T, 184 Schuecker* J, 164 Schuster J, 162 Schutte M, 166 Schwab ME, 26 Schwarzacher SW, 168 Schweikard A, 208 Seeber B, 201 Seeber BU, 125, 200, 202 Seeholzer A, 119 şengör NS, 179 Semmelhack JL, 203 Semprini M, 205 Sepúlveda C, 220 Sernagor E, 168 Serrière G, 214 Shamir M, 90 Shani I, 90 Shirrafi Ardekani A, 218 Sigurdsson T, 22 Simonov A, 57, 75 Sinz F, 203 Sirota A, 76 Sleigh JW, 149 Smilgin A, 180 Smith GD, 145 Sobolev A, 136 Soch J, 139 Soffe S, 216 Sompolinsky H, 91 Sonnenberg L, 48 Sonntag M, 136 Spinelli G, 209 Spreizer S, 171 Sprekeler H, 77, 109 Sprenger J, 134 Stühmer W, 206 Stannat W, 45 Stemmler M, 41, 86, 99 Stemmler MB, 39 Stephan KE, 19 Steuber V, 78 Štih V, 188 Stoewer A, 136 Sun J, 90 Sun Z, 180 Suriya-Arunroj L, 118 Svara F, 210 Synofzik M, 120, 222 Tabas A, 178 Tafreshiha A, 117 Taghizadeh B, 213


Takanen M, 201 Tamimi H, 150 Tanskanen JM, 142 Tanskanen JMA, 143 Tauffer L, 66, 168 Tchaptchet A, 49 Teichmann M, 162 Teixeira CEC, 199 Tejero-Cantero Á, 30 Temizer I, 203 Tetzlaff C, 110–112, 114 Tetzlaff T, 36 Thalmeier D, 72 Theis L, 130, 131 Thier P, 180 Thivierge J, 91 Thurley K, 126, 204 Tieri G, 209 Tiwari NK, 127 Toader O, 82 Tolias A, 131 Tolias AS, 185 Tollin DJ, 194 Tomasello R, 175 Torre E, 134 Torre* E, 137 Toutounji H, 54 Treves A, 128, 181 Triesch J, 113 Trivedi CA, 185 Trouche S, 30 Tsai D, 158 Tucci V, 205 Uhlmann M, 72 Urdapillete E, 181 Uusisaari MY, 217 Völk F, 192 Vörös J, 78 van Albada S, 164 van Albada SJ, 157 van Beest EH, 140 van den Meer J, 152 van der Smagt P, 213 van der Veldt S, 66 van Rossum M, 98 Vato A, 205 Veerasavarappu S, 108 Vidaurre D, 30 Villagrasa F, 163 Vitay J, 38, 122 Vollmayr B, 95 Vornanen I, 63 Wörgötter F, 110–112, 114, 207 Wülfing J, 55 Wachtler T, 136, 151 Wahl A, 26 Wahl S, 205 Walker N, 102 Wallis TSA, 130 Walter M, 152 Waniek N, 155, 156, 188

Weick M, 189, 190, 221 Weigand M, 156 Weinkauf T, 99 Weis T, 108 Weiss RS, 166 Wennekers T, 175 Westkott M, 106 Weydert S, 78 Wheeler BC, 92 Whitney D, 145 Widmann N, 120, 222 Wieland S, 71 Wijetillake AA, 202 Wild B, 105 Wimmer R, 55 Winterer L, 55 Wirtz C, 167 Wolf F, 79, 80, 182, 206 Woolrich M, 30 Wray W, 89 Wykowska A, 116 XU T, 47


Yan C, 122 Yanez A, 84 Yarom Y, 217 Yegenoglu* A, 134 Yizhar O, 22 Zabbah S, 159 Zaleshin A, 81 Zaleshina M, 81 Zambrano D, 115 Zaytsev YV, 132 Zehl L, 60 Zendrikov D, 74 Zenobi-Wong M, 183 Zoccolan D, 117 Zurowski B, 208


Published by Bernstein Center for Computational Neuroscience Heidelberg-Mannheim Central Institute for Mental Health J5, 68159 Mannheim Ruprecht-Karls-Universität Heidelberg Im Neuenheimer Feld 326, 69120 Heidelberg www.bernstein-conference.de