Notes taken by Cameron Musco

Stephen Pratt: Distributed information processing by insect societies
- Focus on Temnothorax ants, primarily in a laboratory setting (not in the wild)
- Typical algorithm: competition between different positive feedback processes (a toy simulation sketch of this dynamic follows the notes on this talk)
  - For example, scout ants lead tandem runs to new homes they find, with probability based on the quality of the nest
  - The leader emits a pheromone that the follower follows. The follower uses tactile communication so the leader knows she is following.

Question: Why can't they just follow the leader using vision?
Answer: They use vision to learn the route to the nest. It seems they need to do this learning and the following on separate channels.

Question: Is the following ant also a scout?
Answer: Yes.

Question: Is all information on nest quality encoded in the probability to recruit?
Answer: It seems to be that way. Ants that see better nests don't recruit more vigorously or successfully than ants that see worse nests; they just seem to recruit with higher probability.

- Ants actually compare nests - not just a satisficing algorithm
- They migrate faster in urgent situations (e.g., the home nest is destroyed) than in less urgent situations
- Individual vs. collective behavior:
  - Individuals experience cognitive overload: they do much worse choosing among many nests than between 2 nests.
  - In a group, each individual only considers 1 or 2 nests, and decision making over many nests is much better.

Question: Would preventing the ants from reaching quorum at a nest cause scouts, even in the group setting, to explore too much and experience cognitive overload?
Answer: Possibly, although there may be other social information aside from the quorum sensing.

- Overall, colonies do better with more nests, but individuals have a higher maximum discrimination - i.e., colonies are more prone to mistakes: individuals show rapid information divergence, while colonies are more prone to taking wrong paths.
- Note: Investigating ants as individuals is 'odd', as they are inherently social and it's unclear that they should behave normally at all when isolated. But it is a powerful probing tool.
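The recruitment dynamic described above (commitment probability tied to nest quality, plus quorum sensing) can be illustrated with a small simulation. This is only a minimal sketch under invented assumptions - qualities in [0, 1] used directly as acceptance probabilities, a fixed quorum threshold, no switching between nests - and not the model from the talk or from Pratt's papers.

```python
import random

def simulate_emigration(nest_qualities, n_scouts=20, quorum=8, max_steps=10_000, seed=0):
    """Toy model of quality-dependent recruitment with a quorum rule.

    Each scout is uncommitted or committed to one candidate nest. At every
    step a random scout acts: an uncommitted scout discovers a random nest;
    a committed scout leads a tandem run, bringing a random uncommitted
    scout to its nest (the positive feedback). In either case the visiting
    scout accepts the nest with probability equal to its quality, standing
    in for the quality-dependent recruitment probability. The colony
    'decides' once any nest holds `quorum` committed scouts. All parameter
    values here are invented for illustration.
    """
    rng = random.Random(seed)
    committed = [None] * n_scouts  # nest index each scout backs, or None

    def support(nest):
        return sum(1 for c in committed if c == nest)

    for step in range(max_steps):
        scout = rng.randrange(n_scouts)
        if committed[scout] is None:
            nest = rng.randrange(len(nest_qualities))  # independent discovery
            visitor = scout
        else:
            nest = committed[scout]  # tandem run to the scout's own nest
            uncommitted = [i for i, c in enumerate(committed) if c is None]
            if not uncommitted:
                continue
            visitor = rng.choice(uncommitted)
        if rng.random() < nest_qualities[nest]:  # acceptance based on quality
            committed[visitor] = nest
            if support(nest) >= quorum:
                return nest, step  # colony commits to this nest
    return None, max_steps  # no quorum reached

if __name__ == "__main__":
    results = [simulate_emigration([0.3, 0.5, 0.9], seed=s)[0] for s in range(200)]
    print("fraction choosing the best nest:", results.count(2) / len(results))
```

In this toy, nests that are both good and already popular accumulate committed scouts fastest, so the quorum usually forms at the best nest even though no single scout compares all the options - the 'competition between positive feedback processes' from the notes above.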
Radhika Nagpal: Cells, Termites, and Robot Collectives
- General question of relating global structure to individual behavior
- Focus on three areas:
  - Theory: proving connections between local rules and global outcomes
  - Biology: systems biology
  - Robotics: building insect-inspired robots
- Example 1: structure-building TERMES robots, inspired by termites
  - A general centralized construction system seems hard to build, and less robust
  - The tough question is how exactly you build the robots.

Question: Why build the robots? Theory or practice?
Answer: Building the robots helps check that seemingly simple assumptions are actually reasonable, so there is a back and forth between theory and practice. Plus, building robots is fun.

- Information is contained in the environment - what has been built so far
- TERMES robots can keep track of their location, check that adding a block was successful, etc.
- They have to build the structure correctly, and also avoid building things they can't climb.
- A 'compiler' translates the structure design into a set of instructions that ensures these correctness properties.

Question: What if one robot blocks another into a trap?
Answer: The compiler will prevent this.

Question: Do termites actually build complex structures?
Answer: Yes, very complex, with specialized areas for the brood, fungus growing, gas exchange, and temperature control.

- Example 2: Kilobots
  - Can we actually build and coordinate 1000 robots?
  - Challenge: some robots are always slower than others (systematic variation), have more faults, etc.

Question: Can we make use of systematic variation between robots? What algorithms deal well with it?
Answer: Collective transport did well with systematic variation; other algorithms totally fall apart. A good thing to look into more.

Amos Korman: Confidence sharing: an economic strategy for efficient information flows in animal groups
- Background: interested in how groups sense their environment, including with social cues. How do they deal with noisy communication?
  - Passive information: look at another agent's behavior and learn something from it.
  - Active communication: two agents actively interact to exchange some knowledge.
- This result: studies the importance of passing a confidence parameter when exchanging noisy information
- Examples of this strategy in nature:
  - Animals (e.g., fish) respond more to social information as individual certainty drops
  - Crickets fight and build confidence in their strength, which they can convey to future rivals
  - The honey bee dance: the more times a bee dances, the more confident she is in the quality of the nest she found
  - Ant communication by bumping, with higher-energy bumps leading to a higher probability of response
- Informal description of the problem being studied:
  - Want to estimate some parameter r.
  - Each agent has an initial unreliable opinion drawn from some distribution D. Agents know the shape of D.
  - Agents meet in pairs randomly (some details on the model here - see the paper).
  - They can observe each other's estimates (noisy, passive communication).
  - They can also communicate auxiliary 'active information'.
  - Want every agent to eventually have an unbiased estimate of r with minimal variance. That is, we want x_a(t), the estimate of agent a at time t, to minimize the variance over all possible estimates it could make at time t.
- Hypothetical 'optimal algorithm':
  - When two agents meet, they exchange all information on what they have seen so far. At time t, agents take all of this collected information into account to make their predictions. This clearly beats all other algorithms for the problem.
  - The variance of an agent's estimate is lower bounded by 1/J_a^r(T), where J_a^r(T) is the Fisher information, with respect to r, of the communication the agent has received.
    - Informally, the communication an agent has received is a random variable, and the Fisher information measures how much information this random variable gives about r.
    - The Cramér-Rao bound connects Fisher information with the minimum possible variance of an agent's estimate.
- Simpler, more biologically plausible algorithm:
  - When two agents meet, they communicate a confidence parameter about their current estimates of r.
  - They update their estimates according to a simple weighted averaging rule (a sketch of such a rule for the Gaussian case follows these notes).
- Upshot: this algorithm gets within a constant factor of the minimum possible variance achieved by the 'optimal algorithm'. For Gaussian noise, it is completely optimal.
  - This shows that the complex 'full information' algorithm is not necessary - it can be matched by simple confidence sharing, indicating the power of the confidence-sharing approach.
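For the Gaussian case, the confidence parameter can be thought of as the precision (inverse variance) of an agent's current estimate, and the weighted averaging rule as precision-weighted averaging. The sketch below is a guess at what such an update looks like when one agent incorporates another's observed estimate; it ignores the observation noise and the correlations that build up over repeated meetings, both of which the paper's actual rule and analysis handle.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """An agent's current estimate of r, plus a confidence (inverse variance)."""
    estimate: float   # current guess at r
    precision: float  # 1 / variance of that guess

def merge(mine: Opinion, observed: Opinion) -> Opinion:
    """Precision-weighted average of one's own opinion with an observed one.

    For independent Gaussian estimates this is the minimum-variance unbiased
    combination: each estimate is weighted by its precision, and the
    precisions add. This is only an illustrative guess at the 'simple
    weighted averaging rule' from the talk, not the rule from the paper.
    """
    total = mine.precision + observed.precision
    combined = (mine.precision * mine.estimate
                + observed.precision * observed.estimate) / total
    return Opinion(estimate=combined, precision=total)

if __name__ == "__main__":
    me = Opinion(estimate=4.0, precision=1.0)     # vague guess (variance 1)
    other = Opinion(estimate=6.0, precision=4.0)  # confident guess (variance 0.25)
    print(merge(me, other))  # leans toward the confident agent: estimate 5.6, variance 0.2
```

The point of sharing the precision along with the estimate is that the receiver knows how heavily to weight what it hears; without that confidence value, a noisy, low-information opinion would pull the average just as hard as a well-informed one.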