My Research Projects

Computer Vision for the Visually Impaired


Visual Accessibility Through Computer Vision

This is joint work with Karl Hellstern, Zijun Wei, Yaser Sheikh, and Takeo Kanade.

I am working on algorithms to improve the BrainPort vision device, which is manufactured by Wicab. With this device, blind users can perceive the approximate size, shape, and location of objects in their surroundings. Visual information is gathered by a camera mounted on a pair of sunglasses and translated into electric pulses that are delivered to the surface of the tongue. We have designed a system that allows users of the BrainPort device to recognize faces. The system detects faces in the image captured by the camera, compares them with a "prototypical" or "average" face, and produces a difference map that allows the user to literally feel what is unique about the face of the person in front of them. A prototype of our system was depicted in an episode of the BBC show Frontline Medicine.
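The core idea of the difference map, comparing a detected face against an average face pixel by pixel, can be sketched as follows. The function name, the tiny 8x8 toy resolution, and the normalization scheme are illustrative assumptions, not the actual BrainPort pipeline:

```python
import numpy as np

def face_difference_map(face, average_face):
    """Per-pixel difference between a detected face and an average face.

    Both inputs are grayscale arrays of the same shape with values in
    [0, 255]. Large output values mark the regions where this face
    deviates most from the average -- the regions a tongue display
    would emphasize.
    """
    face = face.astype(np.float64) / 255.0
    average_face = average_face.astype(np.float64) / 255.0
    diff = np.abs(face - average_face)
    # Rescale so the strongest deviation maps to full stimulation intensity.
    if diff.max() > 0:
        diff = diff / diff.max()
    return diff

# Toy example: a face that is darker than average at one spot.
avg = np.full((8, 8), 128, dtype=np.uint8)
face = avg.copy()
face[2, 2] = 30          # hypothetical distinctive feature
dmap = face_difference_map(face, avg)
print(dmap[2, 2])        # 1.0 -- the most distinctive pixel
print(dmap[0, 0])        # 0.0 -- identical to the average face
```

In the real system the face crops would first be detected and aligned; here the two arrays are simply assumed to be registered.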

We have also created an Android app that can detect various signs (such as restroom, EXIT, etc.) and guide the visually impaired user to the location of the sign using vibration signals.
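The guidance step could look roughly like this minimal sketch, which maps the detected sign's position in the camera frame to a directional cue. The dead-zone parameter and the left/ahead/right cues are illustrative assumptions; the actual app's vibration design may differ:

```python
def guidance_signal(sign_cx, frame_width, dead_zone=0.1):
    """Map a detected sign's horizontal position to a directional cue.

    sign_cx: x-coordinate of the sign's bounding-box center in the frame.
    Returns 'left', 'right', or 'ahead' depending on where the sign lies
    relative to the camera's center, with a small dead zone in the middle
    so the cue does not flicker when the sign is roughly centered.
    """
    offset = (sign_cx - frame_width / 2) / (frame_width / 2)  # in [-1, 1]
    if offset < -dead_zone:
        return 'left'
    if offset > dead_zone:
        return 'right'
    return 'ahead'

print(guidance_signal(100, 640))  # left
print(guidance_signal(320, 640))  # ahead
print(guidance_signal(600, 640))  # right
```

The cue would then be rendered as distinct vibration patterns on the phone.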

Persistent Particle-Filters For Background Subtraction

M.Sc. Thesis

I did my thesis under the supervision of Prof. Shmuel Peleg.

Moving objects are usually detected by measuring the appearance change from a background model. The background model should adapt to slow changes such as illumination, but detect faster changes caused by moving objects. Particle filters do an excellent job of modeling the non-parametric distributions needed for a background model, but may adapt too quickly to the foreground objects.

A persistent particle filter is proposed, inspired by bacterial persistence. Bacterial persistence is linked to the random switching of bacteria between two states: a normally growing cell and a dormant but persistent cell. The dormant cells can survive stress such as antibiotics. When a dormant cell switches back to normal status after the stress is over, bacterial growth continues.

Similar to bacteria, particles switch between dormant and active states, where dormant particles do not adapt to the changing environment. A further modification of particle filters allows discontinuous jumps to new parameters, enabling foreground objects to join the background when they stop moving. This also quickly builds multi-modal distributions.
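A minimal sketch of the dormant/active mechanism, for a single pixel's intensity model. The parameter values, the exponential update rule, and the median background estimate are illustrative choices, not the thesis's actual formulation:

```python
import random

class PersistentParticleFilter:
    """Toy one-pixel background model with persistent ('dormant') particles.

    Each particle holds an intensity estimate and a state. Active particles
    track the observed value; dormant particles keep their value, so a
    brief foreground object cannot erase the remembered background.
    Particles switch states at random, mimicking bacterial persistence.
    """
    def __init__(self, n=50, init=0.0, p_sleep=0.02, p_wake=0.02, rate=0.2):
        self.particles = [{'value': init, 'dormant': False} for _ in range(n)]
        self.p_sleep, self.p_wake, self.rate = p_sleep, p_wake, rate

    def update(self, observation):
        for p in self.particles:
            # random switch between states, as in bacterial persistence
            if p['dormant']:
                if random.random() < self.p_wake:
                    p['dormant'] = False
            elif random.random() < self.p_sleep:
                p['dormant'] = True
            # only active particles adapt to the observation
            if not p['dormant']:
                p['value'] += self.rate * (observation - p['value'])

    def background(self):
        vals = sorted(p['value'] for p in self.particles)
        return vals[len(vals) // 2]  # median estimate

# Deterministic demo: force one particle dormant, then cover the pixel
# with a bright foreground object for 10 frames.
model = PersistentParticleFilter(n=5, init=50.0, p_sleep=0.0, p_wake=0.0)
model.particles[0]['dormant'] = True
for _ in range(10):
    model.update(200.0)
print(round(model.particles[0]['value']))  # 50: dormant particle kept the background
print(round(model.particles[1]['value']))  # 184: active particles adapted toward 200
```

When the object leaves, the dormant particle can wake and restore the correct background immediately, instead of re-learning it from scratch.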


HANS - HUJI's Autonomous Navigation System

An autonomous mobile robot capable of navigating the Givat-Ram campus.

I worked on HANS alongside Keren Haas and Dror Shalev as a final project for the Computer Engineering program. HANS won the Computer Engineering School's "Best computer engineering project" award on July 30, 2008. We finished the work in September 2008. Our advisors were Prof. Jeff Rosenschein, Nir Pochter, and Zinovi Rabinovich.

Abstract: Building autonomous robots that can operate in various scenarios has long been a goal of research in Artificial Intelligence. Recent progress has been made on space exploration systems and systems for autonomous driving. In line with this work, we present HANS, an autonomous mobile robot that navigates the Givat Ram campus of the Hebrew University. We have constructed a wheel-based platform that holds various sensors. Our system's hardware includes a GPS, compass, and digital wheel encoders for localizing the robot within the campus area. Sonar is used for fast obstacle avoidance, and a video camera for vision-based path detection. HANS' software uses a wide variety of probabilistic methods and machine learning techniques, ranging from particle filters for robot localization to Gaussian Mixture Models and Expectation Maximization for its vision system. Global path planning is performed on a GPS coordinate database donated by MAPA Ltd., and local path planning is implemented using the A* algorithm on a local grid map constructed from the robot's sensors.
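The local planning step, A* on a grid built from the sensors, can be sketched as follows. The 4-connected grid, unit step costs, and Manhattan heuristic are simplifying assumptions; HANS' actual grid map and cost model are not reproduced here:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid (1 = obstacle).

    Returns the list of cells from start to goal, or None if unreachable.
    The heuristic is Manhattan distance, which is admissible for
    unit-cost 4-connected moves.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f = g + h, g, cell, parent)
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue                          # already expanded with lower cost
        came_from[cell] = parent
        if cell == goal:                      # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float('inf')):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None

# A wall across the middle row forces a detour around it.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # 6 moves around the obstacle row
```

In the robot, `grid` would be the local occupancy map rebuilt continuously from sonar and vision, and the goal would come from the global GPS plan.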

Video segmentation of unknown, static background using min-cut

An algorithm for foreground layer extraction based on EM learning and min-cut.

The algorithm was developed as part of 'guided work' that took place during the spring semester of 2007 under the supervision of Prof. Shmuel Peleg.

Abstract: In this report we introduce an algorithm for foreground layer extraction based on EM learning and min-cut. The background is unknown but assumed to be static, and the foreground is therefore defined as the dynamic part of the frame. From a single video stream, our algorithm uses color cues as well as image contrast (that is, the color differences between adjacent pixels) to cut out the foreground layer. Experimental results show that the accuracy is sufficient for most practical uses.
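The min-cut step can be illustrated on a toy one-dimensional "image": unary costs come from the learned color models, and pairwise weights are low where contrast is high, so the cut prefers to pass through color edges. This is a generic sketch using Edmonds-Karp max-flow, not the report's exact formulation:

```python
from collections import deque

def max_flow_min_cut(n, edges, s, t):
    """Edmonds-Karp max-flow; returns the set of nodes on the source side."""
    cap = [dict() for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)              # residual edge
    while True:
        parent = {s: None}                   # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        v, bottleneck = t, float('inf')      # find the bottleneck capacity
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t                                # push flow along the path
        while parent[v] is not None:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
    side, q = {s}, deque([s])                # residual reachability = min cut
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in side:
                side.add(v)
                q.append(v)
    return side

def segment_row(fg_cost, bg_cost, contrast):
    """Label each pixel of a 1-D image fg/bg by s-t min-cut.

    fg_cost[i]: cost of labeling pixel i foreground (edge i -> sink).
    bg_cost[i]: cost of labeling pixel i background (edge source -> i).
    contrast[i]: smoothness weight between pixels i and i+1 -- low where
    the color difference is high, so cuts align with image edges.
    """
    n = len(fg_cost)
    s, t = n, n + 1
    edges = []
    for i in range(n):
        edges.append((s, i, bg_cost[i]))
        edges.append((i, t, fg_cost[i]))
    for i in range(n - 1):
        edges.append((i, i + 1, contrast[i]))
        edges.append((i + 1, i, contrast[i]))
    side = max_flow_min_cut(n + 2, edges, s, t)
    return ['fg' if i in side else 'bg' for i in range(n)]

# Strong fg evidence on the left, bg on the right, and a low smoothness
# weight between pixels 1 and 2 (a color edge), so the cut lands there.
print(segment_row(fg_cost=[1, 2, 9], bg_cost=[9, 8, 1], contrast=[5, 1]))
```

In the full algorithm the unary costs would come from the EM-learned foreground/background color models over a 2-D pixel grid, but the graph construction is the same.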