Notes taken by Stephan Holzer

===================================
11:00-11:30 - Ila Fiete - UT Austin
Title: Distributed neural codes for navigation in mammals

Components of biological systems are noisy, so computation in biological systems is noisy. This has been measured, e.g., by presenting the same stimulus repeatedly to neurons, whose responses show variability. We can model this by a stochastic process (e.g. fit to data measured from neurons). While computing or processing information, the noise accumulates. This might be problematic, but nature can get around it to some extent. An example are desert ants that search for food and are able to navigate. While searching for food their path looks partially random. However, it turns out that ants can remember directional information and are thus able to clean accumulated noise out of the overall direction of the path. Evidence for this is that after finding food an ant goes straight back to the nest by walking in the correct direction (instead of retracing the path it took to the food, or heading in a wrong direction).

The question arises how the brain can do this while suffering from the recursive noise accumulation mentioned above. One solution to reduce error is to let several neurons each encode the same information. They might fire at different thresholds, but by counting the number of spikes one can retrieve the stimulus value more accurately. Such a setup is called a Classical (Neuron) Population Code (CPC), and one can use Fisher information to prove that CPCs keep the mean squared error low (error control performance). Given a CPC of n neurons, one can prove a 1/n reduction in squared error (and thus a polynomial increase in accuracy). On the other hand, this results in a linear decrease in the information rate, i.e. the number of information bits encoded relative to the total bits available. Is this a good use of the given neurons? Is it possible to use neurons with better codes?
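The 1/n reduction in squared error from pooling redundant neurons can be illustrated with a small Monte Carlo sketch. All parameters here are illustrative assumptions, and the neurons are modeled simply as the stimulus plus independent Gaussian noise, a simplification of real neural variability:

```python
import random

def cpc_mse(n_neurons, trials=5000, noise_sd=1.0, stimulus=5.0, seed=0):
    """Mean squared error of a population estimate that averages the
    responses of n_neurons independent noisy neurons to one stimulus."""
    rng = random.Random(seed)
    total_sq_err = 0.0
    for _ in range(trials):
        # each neuron reports the stimulus corrupted by independent noise
        responses = [stimulus + rng.gauss(0.0, noise_sd)
                     for _ in range(n_neurons)]
        estimate = sum(responses) / n_neurons  # pooled (averaged) readout
        total_sq_err += (estimate - stimulus) ** 2
    return total_sq_err / trials
```

With these parameters the MSE comes out to about 1.0 for a single neuron and about 0.01 for 100 neurons, i.e. it shrinks roughly like 1/n, as the Fisher-information argument predicts.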
In theory (when not restricted to neurons), Shannon showed in 1948 that one can obtain an exponentially vanishing error at an asymptotically finite information rate.

Let's go back to real implementations made by nature. An example of coding in brains are place cells. In an experiment a rat is placed into a box. Place cells in the rat's brain behave (e.g. in when they fire) as if they prefer certain places in the box over others. These cells are also organized as CPCs by having multiple copies. Another example of coding in the brain are grid cells, which have been reported in mice, humans and bats. A grid cell has multiple spots in the box that it likes and fires upon visiting them. Interestingly, the firing peaks occur on the nodes of a triangular tiling of the plane. Even when the lights are turned off these cells fire, based on location estimates derived from sensory information. Using those cells, rats have information on their location in an area as wide as e.g. 100m x 100m. However, the distance between the grid points at which a grid cell fires (called the period of the grid cell) is typically between 0.3m and 3m. So how can rats know their position in a 100m x 100m area? Combining grid cells of different periods can encode larger distances: e.g. grid cells with periods 2m and 3m fire at the same time only every 6m, and grid cells with periods 2m, 3m and 0.7m fire at the same time only every 42m. In this sense grid-cell codes extend classical population codes to a more combinatorial code.

====================================
11:30-12:00 - Alex Cornejo - Harvard
Title: Task allocation in ant colonies

This research studies task allocation in ant colonies. In this talk the authors propose a mathematical model for the division of labor and derive distributed algorithms based on it. Their model considers a set A of n ants (e.g. n=100 ants).
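The range over which a set of grid periods is unambiguous is their least common multiple, which for the example periods 2m, 3m and 0.7m works out to 42m. A quick sketch of this arithmetic (exact rational arithmetic is used because the periods need not be integers):

```python
from fractions import Fraction
from math import lcm

def combined_period(periods):
    """Distance after which grid cells with the given spatial periods
    all fire at the same location again: the least common multiple
    of the periods, computed over rationals."""
    fracs = [Fraction(p).limit_denominator(1000) for p in periods]
    den = lcm(*(f.denominator for f in fracs))       # common denominator
    ints = [f.numerator * (den // f.denominator) for f in fracs]
    return lcm(*ints) / den
```

`combined_period([2, 3])` returns 6.0 and `combined_period([2, 3, 0.7])` returns 42.0, matching the examples above. Note how the unambiguous range grows multiplicatively with the number of distinct periods, which is the combinatorial advantage of the grid-cell code over a CPC.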
Further, there is a set T of tasks whose size does not depend on the number of ants and is typically smaller than the size of A (e.g. 3 tasks). For each ant and task there is an energy demand d to perform the given task at a time t. The value of d depends, e.g., on the weather outside the nest, the environment and the size of the ant. One can also specify the energy that an ant can supply at time t to work on a task. Based on this, a good task allocation should be derived; an allocation can be interpreted as an assignment that maps each (ant, time) pair to some task. Each such allocation corresponds to a certain amount of energy that is supplied. A good task allocation minimizes both underproduction and overproduction of energy for each task (based on the energy supplied by the ants assigned to the task).

In order to derive an algorithm that finds a good task allocation, they consider the following abstraction as a computing platform: ants can communicate synchronously and each ant has constant memory, but large computational power (restricted to this memory). Each ant can sense whether there is a demand of energy for a given task; this sense is modeled as a binary feedback function f. The proposed algorithm performs a randomized binary search based on f to reach an optimal division of labor. However, performing such a search over a colony of size n needs O(log n) memory, while each ant has only constant memory. It is demonstrated how this memory demand can be satisfied by encoding the needed state in the colony itself. The authors propose that ants can be in five different states: RESTING, FIRSTRESERVE, SECONDRESERVE, TEMPWORKER, COREWORKER. Furthermore it is assumed that ants prefer doing a task they have done previously, and a potential table is created for the tasks that indicates their preference. Based on this, ants change states during the binary search. It is argued that the proposed algorithm converges to a near-optimal division of labor in time O(log n).
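A minimal sketch of the binary-search idea, assuming a single task and a deficit-only feedback bit. This centralized, deterministic version is only meant to show why O(log n) rounds and O(log n) bits of search state suffice; the authors' actual algorithm is randomized and stores that state distributedly in the five ant states rather than in a central counter:

```python
def workers_needed(n, has_deficit):
    """Binary-search for the smallest worker count w in [0, n] at which
    the colony no longer senses an energy deficit, using only the
    binary feedback bit has_deficit(w). Takes O(log n) feedback rounds,
    and lo/hi together are the O(log n) bits of search state."""
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        if has_deficit(mid):   # demand still unmet: try more workers
            lo = mid + 1
        else:                  # demand met (or exceeded): try fewer
            hi = mid
    return lo

# Hypothetical example: a task whose demand is covered by exactly 37 ants.
print(workers_needed(100, lambda w: w < 37))
```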
==================================================================================
12:00-12:20 - Contributed Paper: Shashank Singh, Saket Navlakha and Ziv Bar-Joseph
Title: Distributed and computationally efficient belief propagation based on swarms of foraging bacteria

This talk presented a different concept of foraging. Previous talks presented settings where insects move the food to the swarm; in the case of E. coli bacteria the swarm moves to the food. Members of the swarm diffuse protein signals as a form of communication, but cannot leave solid trails, as the environment is very stochastic. In contrast to e.g. ants, all swarm members can be considered to be of the same type (no roles like nurse, queen or forager). Furthermore, these bacteria have extremely limited computational power and no navigational ability (which some insects do have). Still they are able to maximize an objective function that describes the density of the food, which is non-convex and can have several local maxima. To do so, the swarm performs a random walk based on changes in food density: bacteria can measure the food density while moving and bias their random choice of direction towards higher food density. Although this eventually converges to a local maximum of the diffused food, on its own this strategy can be very inefficient.

A speedup of convergence towards the local maximum is achieved by (partially) adapting the movement of a single cell to depend on its distance to other bacteria. This is achieved by three rules. 1) Repulsion: if a cell senses a nearby area with too many bacteria, it moves away. 2) Attraction: if there are too few bacteria in an area, it moves there. 3) Orientation: if there is a good amount of bacteria in the area, the cell tries to move in a direction parallel to the others. It turns out that these three rules accelerate the speed at which the swarm finds a local maximum when the direction is clear.
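The gradient-biased random walk of a single cell can be sketched in one dimension as a run-and-tumble process: keep the current direction when food density increased on the last step, otherwise tumble to a fresh random direction. All parameters here, including the bias value and the density function, are illustrative assumptions rather than the paper's model:

```python
import random

def biased_walk_1d(density, start, steps, bias=0.8, seed=1):
    """One cell doing a run-and-tumble walk on the integer line.
    It keeps its direction with probability `bias` when the food
    density improved on the last step, and with probability 1 - bias
    otherwise; a tumble picks a new direction uniformly at random."""
    rng = random.Random(seed)
    x, direction = start, rng.choice([-1, 1])
    last = density(x)
    for _ in range(steps):
        x += direction
        now = density(x)
        keep = bias if now > last else 1.0 - bias
        if rng.random() > keep:
            direction = rng.choice([-1, 1])  # tumble
        last = now
    return x

# Hypothetical unimodal food density peaked at x = 0: starting at x = 50,
# the walk drifts towards the peak and then hovers around it.
```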
This strategy also avoids collisions and keeps the swarm together, which yields better communication. Previous work that simulated the bacteria's behavior using these rules made the following assumptions about the bacteria's communication abilities: messages can be continuous numbers and the receivers' measurements are precise. It was also assumed that receivers can determine the identity of a message's sender, which requires O(log n) extra bits per message. The authors propose a new, more realistic model in which bacteria can send only a few bits, can measure only approximately, and cannot identify the sender of a message.

In experimental simulations the researchers considered different path lengths of the diffused food leading towards a local density maximum. In these simulations they compared different models, such as the two described above, as well as different weights when assigning probabilities for the random walk based on directional food density. A counterintuitive result from these simulations is that the more limited communication of the proposed model yields better results. Certain weights for the random walk help the swarm take on different shapes (not only balls) that can get around obstacles. Based on these results the question arose what happens when some of the agents are silent. This is a reasonable question, as broadcasting consumes energy and many messages are sent redundantly. It turned out that even when only 10% of the bacteria communicate, one gets a significant improvement over a setting where all bacteria are silent, but performance does not improve much more when all of them communicate. Conclusion: only little communication is needed. Future modifications of the model might include making a bacterium's velocity depend on the strength of the increase of food density in a particular direction.
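The few-bit messages of the proposed model can be thought of as quantizing a continuous measurement into 2^k levels before broadcasting. This is a hypothetical sketch of that idea, not the paper's actual message encoding:

```python
def quantize(value, lo, hi, bits):
    """Map a continuous measurement in [lo, hi] to one of 2**bits
    levels, i.e. a message of only `bits` bits. Out-of-range values
    are clamped to the nearest end of the range."""
    levels = 2 ** bits
    clamped = min(max(value, lo), hi)
    return min(int((clamped - lo) / (hi - lo) * levels), levels - 1)

def dequantize(index, lo, hi, bits):
    """Reconstruct the midpoint of the level the message names."""
    levels = 2 ** bits
    return lo + (index + 0.5) * (hi - lo) / levels
```

With 3 bits, any measurement in [lo, hi] is reconstructed to within (hi - lo)/16 of its true value, while the message costs 3 bits instead of a continuous number, consistent with the talk's observation that very coarse communication already captures most of the benefit.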