Research Interest
 
I do research on perception, including computer vision, for outdoor mobile robots, aiming to develop mechanisms that enable robots to better understand their operating conditions. I am also interested in developing a robot learning framework that takes advantage of two aspects of the human learning process: cumulative (or life-long) learning by means of stochastic approximation, and optimal representation of previous experiences as reusable a priori knowledge.
 
Projects
Detection and Tracking of Stop-Lines

   To be successfully deployed in real-world driving environments, self-driving cars must be capable of complying with traffic rules, i.e., understanding the rules in place and executing driving maneuvers as dictated. For example, an autonomous vehicle should be able to recognize a stop-line and stop at it. This capability is crucial for self-driving cars to share roads with manually driven cars. To that end, this work develops a computer vision algorithm that detects stop-lines through an analysis of the geometric layout of detected lane-markings and tracks the detected stop-line over time using an unscented Kalman filter. To detect lateral and longitudinal lane-markings, our method applies a spatial filter that emphasizes the intensity contrast between lane-marking pixels and their neighboring pixels. We then examine the detected lane-markings to identify perpendicular geometric layouts between longitudinal and lateral lane-markings, which indicate stop-lines. To provide reliable stop-line recognition, we track the detected stop-line over frames with an unscented Kalman filter. In tests with videos of busy, real-world urban streets, our method demonstrated promising results in terms of both initial detection accuracy and tracking reliability.
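The spatial filtering step can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simple symmetric local-contrast filter (a common choice for lane-marking extraction) applied to one image row:

```python
def lane_marking_response(row, m):
    """Local-contrast filter: bright, roughly m-pixel-wide marks
    flanked by darker pavement on both sides give a high response."""
    out = [0] * len(row)
    for x in range(m, len(row) - m):
        left, right = row[x - m], row[x + m]
        r = 2 * row[x] - left - right - abs(left - right)
        out[x] = max(r, 0)
    return out

# synthetic pavement row with a bright 5-pixel marking at columns 10-14
row = [50] * 30
for x in range(10, 15):
    row[x] = 200
resp = lane_marking_response(row, m=5)
print(max(resp))  # 300, reached only inside the marking
```

The filter responds strongly only where a pixel is brighter than both of its lateral neighbors at distance m, which suppresses one-sided intensity edges such as shadows.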

Young-Woo Seo and Raj Rajkumar, A vision system for detection and tracking of stop-lines, In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC-14), pp. 1970-1975, Qingdao, China, Oct 8-11, 2014.

Understanding the Computational Workload of a Self-Driving Car

   To drive autonomously, the computing unit of a self-driving car must run many software tasks (e.g., motion planning, moving-object detection, etc.) in parallel. Because those tasks are computationally intensive and the capacity of a computing unit is bounded, it is important to utilize the limited computational resources efficiently. To this end, this study develops a method that predicts the CPU usage patterns of software tasks running on a self-driving car. To ensure the safety of such dynamic systems, worst-case-based CPU utilization analysis has traditionally been used; however, dynamically changing driving contexts demand a more flexible approach to efficient computing-resource management. To better understand dynamic CPU usage patterns, this study designs a feature vector representing information about the driving environment and predicts, using regression methods, selected tasks' CPU usage patterns under specific driving contexts. Experiments with real-world vehicle data show promising results and validate the usefulness of the proposed methods.
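A minimal sketch of the regression idea, assuming a single hypothetical context feature (the number of tracked moving objects) and ordinary least squares; the feature vector and regression methods in the paper are richer:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (one context feature)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# hypothetical log: (number of tracked moving objects, CPU % of a tracker task)
objs = [0, 2, 4, 6, 8]
cpu = [10, 14, 18, 22, 26]
a, b = fit_linear(objs, cpu)
print(round(a, 2), round(b, 2))  # 2.0 10.0
```

Given a predicted driving context (e.g., an upcoming busy intersection), such a model lets the resource manager anticipate a task's CPU demand instead of always reserving the worst case.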

Young-Woo Seo, Junsung Kim, and Raj Rajkumar, Predicting dynamic computational workload of a self-driving car, In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC-2014), pp. 3030-3035, San Diego, CA, Oct 5-8, 2014. [10.1109/SMC.2014.6974391]

Junsung Kim, Young-Woo Seo, Hyoseung Kim, and Raj Rajkumar, Can cyber-physical systems be predictable? inferring cyber-workloads from physical attributes, In Proceedings of the ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS-2014), 2014. [10.1109/ICCPS.2014.6843732]

Detection and Tracking of the Boundary of Drivable Regions on Roads with No Lane-Markings

   This study develops a computer vision algorithm that detects and tracks the boundaries of drivable regions appearing in input images. The roads we are particularly interested in are paved roads with no lane-markings, such as those connecting public roads to residential areas. To be truly useful, self-driving cars should be able to drive on such roads, so that they can execute autonomous driving maneuvers all the way from a house to a destination. To provide such a capability, we develop a perception algorithm that detects the boundaries of drivable regions using ground-intensity patterns learned on the fly, and tracks the detected boundaries using a Bayes filter. Specifically, to detect the left and right boundaries of drivable regions, our method samples the image region directly in front of the ego-vehicle and uses that region's appearance to identify the boundary of the drivable region in the input images. Due to variation in image acquisition conditions, the image features necessary for boundary detection may not always be present; when this happens, a boundary detector working on a frame-by-frame basis fails. To handle such cases effectively, our method tracks the detected boundaries over frames using an unscented Kalman filter. Experiments with real-world video data show promising results.
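The sampling-based appearance idea can be sketched as follows, under the simplifying assumption that "appearance" is just the intensity mean and standard deviation of the sampled road patch (the learned model in the paper is more elaborate):

```python
def drivable_mask(row, sample, k=3.0):
    """Label pixels drivable when their intensity lies within k standard
    deviations of the road patch sampled in front of the ego-vehicle."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((v - mean) ** 2 for v in sample) / n
    std = var ** 0.5 or 1.0
    return [abs(v - mean) <= k * std for v in row]

# road pixels ~100, brighter shoulders ~180 on both sides
row = [180] * 4 + [100, 102, 98, 101, 99, 100] + [180] * 4
sample = [100, 101, 99, 100]   # patch sampled just ahead of the vehicle
mask = drivable_mask(row, sample)
left = mask.index(True)                           # left boundary column
right = len(mask) - 1 - mask[::-1].index(True)    # right boundary column
print(left, right)  # 4 9
```

The leftmost and rightmost drivable columns give the per-frame boundary observation that the tracking filter then smooths over time.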

Young-Woo Seo and Raj Rajkumar, Detection and tracking of boundary of unmarked roads, In Proceedings of the 17th International Conference on Information Fusion (Fusion-2014), Salamanca, Spain, 2014. [pdf|IEEE Xplore]

Estimation and Tracking of Ego-Vehicle State for Lateral Localization

   For safe urban driving, keeping a car within its road-lane boundary is a critical prerequisite. It requires human and robotic drivers alike to recognize the boundary of the road-lane and the vehicle's location with respect to the boundary of the road-lane it happens to be driving in. To provide such a perception capability, we develop a new computer vision system that analyzes a stream of perspective images to produce the vehicle's location relative to the boundary of its road-lane, along with detections of lane-crossing and lane-changing maneuvers. To assist the vehicle's lateral localization, our algorithm also estimates the host road's geometry, including the number of road-lanes, their widths, and the index of the host road-lane. The local road geometry estimated frame by frame may be inconsistent across frames due to variations in image features. To handle such inconsistent estimates, we implement an unscented Kalman filter (UKF) to smooth the estimated road geometry over time. Tests on an inter-city highway showed that our system provides stable and reliable performance in computing lateral distances and detecting lane-crossing and lane-changing maneuvers.
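The smoothing step can be illustrated with a simpler stand-in: a one-dimensional linear Kalman filter with a constant-state model, applied to a hypothetical per-frame lane-width estimate (the system above tracks a richer state with a UKF):

```python
def kalman_smooth(measurements, q=0.01, r=0.5, x0=3.5, p0=1.0):
    """1-D Kalman filter with a constant-state model: smooths a noisy
    per-frame scalar estimate (e.g., host road-lane width in meters)."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p += q                # predict: state assumed constant, noise q
        k = p / (p + r)       # Kalman gain given measurement noise r
        x += k * (z - x)      # update with this frame's measurement
        p *= (1 - k)
        out.append(x)
    return out

widths = [3.6, 3.4, 3.7, 3.5, 5.0, 3.5, 3.6]  # 5.0 is an outlier frame
sm = kalman_smooth(widths)
print(all(abs(w - 3.5) < 0.6 for w in sm))  # True
```

The outlier frame pulls the smoothed estimate only mildly, which is exactly the frame-to-frame consistency the filter is meant to provide.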

Young-Woo Seo and Myung Hwangbo, A computer vision system for lateral localization, Journal of Field Robotics, ROB-14-0131, in press, 2015.

Young-Woo Seo and Raj Rajkumar, Tracking and estimation of ego-vehicle's state for lateral localization, In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC-2014), pp. 1251-1257, Qingdao, China, Oct 8-11, 2014.

Young-Woo Seo and Raj Rajkumar, Utilizing instantaneous driving direction for enhancing lane-marking detection, In Proceedings of the 25th IEEE Intelligent Vehicles Symposium (IV-2014), pp. 170-175, Dearborn, MI, 2014. [10.1109/IVS.2014.6856467]

Young-Woo Seo and Raj Rajkumar, Use of a monocular camera to analyze a ground vehicle's lateral movements for reliable autonomous city driving, In Proceedings of the 5th IEEE IROS Workshop on Planning, Perception and Navigation for Intelligent Vehicles (PPNIV-2013), pp. 197-203, Nov 3-8, Tokyo, Japan, 2013. [pdf]

   
Detection and Tracking of the Vanishing Point on a Horizon

   Many computer vision applications in advanced driver assistance systems and self-driving cars rely on knowing the location of the vanishing point on the horizon. The horizontal vanishing point's location provides important information about the driving environment, such as the instantaneous driving direction of the roadway, sampling regions for the drivable regions' image features, and the search direction for moving objects. Many existing methods detect the vanishing point frame by frame; each output may look optimal for that frame, but over a series of frames the detected locations are inconsistent, yielding unreliable information about roadway structure. This work develops a novel algorithm that detects vanishing points in urban scenes using lines and tracks them over frames with an extended Kalman filter (EKF) to smooth the trajectory of the horizontal vanishing point. Experiments with thousands of urban scene images demonstrate both the practicality of the detection method and the effectiveness of the tracking method.
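In its simplest form, the line-based detection step reduces to a least-squares intersection of the detected lines. A sketch, assuming lines are given in normal form a*x + b*y = c with unit normals (the paper's line selection and tracking are more involved):

```python
import math

def vanishing_point(lines):
    """Least-squares intersection of lines (a, b, c), a*x + b*y = c."""
    saa = sum(a * a for a, b, c in lines)
    sab = sum(a * b for a, b, c in lines)
    sbb = sum(b * b for a, b, c in lines)
    sac = sum(a * c for a, b, c in lines)
    sbc = sum(b * c for a, b, c in lines)
    det = saa * sbb - sab * sab
    # solve the 2x2 normal equations for the point closest to all lines
    x = (sbb * sac - sab * sbc) / det
    y = (saa * sbc - sab * sac) / det
    return x, y

# three lane edges that all pass through pixel (320, 240)
lines = []
for theta in (0.3, 1.0, 2.0):          # line directions in radians
    a, b = math.sin(theta), -math.cos(theta)
    lines.append((a, b, a * 320 + b * 240))
x, y = vanishing_point(lines)
print(round(x), round(y))  # 320 240
```

With noisy real detections the lines no longer meet exactly, and the least-squares point jitters from frame to frame, which is what motivates the EKF smoothing described above.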

Young-Woo Seo and Raj Rajkumar, Detection and tracking of the vanishing point on a horizon for automotive applications, In Proceedings of the 6th IEEE IROS Workshop on Planning, Perception and Navigation for Intelligent Vehicles (PPNIV-2014), Chicago, Sep 14-18, 2014. [pdf]

Detection and Tracking of Moving Objects in Urban Driving Environments

   Because a self-driving car must interact with other road occupants (e.g., cars, pedestrians, bicyclists, etc.), knowledge of these objects is a critical piece of information that a self-driving car must maintain reliably and in a timely manner. This work develops a new system for detecting and tracking moving objects that extends and improves the capabilities of our earlier system used in the 2007 DARPA Urban Challenge. In particular, we revised our earlier motion and observation models for active sensors and incorporated measurements from a vision sensor. The vision module detects pedestrians, bicyclists, and vehicles to generate corresponding vision targets. Our new system exploits these visual detection results to improve our earlier methods for tracking-model selection, data association, and movement classification. Using data logs of actual city and inter-city driving, we demonstrate the improvement and performance gain of the new system.
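As an illustration of the data-association subproblem, here is a greedy nearest-neighbor matcher with a gating distance; the actual system's association logic, which also exploits vision targets, is more sophisticated:

```python
def associate(tracks, detections, gate=2.0):
    """Greedy nearest-neighbor data association: each detection is
    matched to the closest track within the gating distance, one-to-one."""
    # enumerate all (distance, track id, detection index) pairs, closest first
    cand = sorted(
        (abs(tp[0] - d[0]) + abs(tp[1] - d[1]), tid, i)
        for tid, tp in tracks for i, d in enumerate(detections))
    pairs, matched, used = [], set(), set()
    for dist, tid, i in cand:
        if dist <= gate and tid not in matched and i not in used:
            pairs.append((tid, i))
            matched.add(tid)
            used.add(i)
    return pairs

tracks = [("car", (10.0, 5.0)), ("ped", (3.0, 1.0))]
dets = [(3.2, 1.1), (10.4, 5.1), (50.0, 50.0)]
print(sorted(associate(tracks, dets)))  # [('car', 1), ('ped', 0)]
```

The far-away third detection falls outside every gate and is left unmatched, so it would spawn a new track rather than corrupt an existing one.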

Hyunggi Cho, Young-Woo Seo, B.V.K. Vijaya Kumar, and Raj Rajkumar, A multi-sensor fusion system for moving object detection and tracking in urban driving environments, In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA-2014), pp. 1836-1843, Hong Kong, China, 2014. [10.1109/ICRA.2014.6907100]

Generating Omni-Directional View of Neighboring Objects for Ensuring Safe Urban Driving

   To drive safely on urban streets, it is critical for self-driving cars to obtain the locations of other road occupants (e.g., cars, pedestrians, bicyclists, etc.) in a timely manner; if such information is estimated unreliably, the self-driving car is put at great risk. To provide our self-driving car with such a capability, this work develops a perception algorithm that, by combining scan points from multiple automotive-grade LIDARs, generates temporally consistent and spatially seamless snapshots of neighboring (dynamic and static) objects. To do so, the proposed algorithm first represents a square region centered at the ego-vehicle's current location as a grid of cells, and then, for each LIDAR scan, traces a virtual ray between the LIDAR and the edge of its reliable sensing range to update the cells along the ray. In tests with data from several urban street drives, the proposed algorithm showed promising results in clearly identifying traversable areas within the drivable regions.
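The ray-tracing update can be sketched on a toy occupancy grid; the cell states and the linear ray stepping below are illustrative choices, not the report's exact representation:

```python
def trace_ray(grid, x0, y0, x1, y1, hit):
    """Cast a virtual ray from the LIDAR cell (x0, y0) toward (x1, y1):
    cells along the ray are marked free (0); the endpoint is marked
    occupied (2) when the ray ended in a return ('hit')."""
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for s in range(steps + 1):
        x = round(x0 + (x1 - x0) * s / steps)
        y = round(y0 + (y1 - y0) * s / steps)
        grid[y][x] = 0
    if hit:
        grid[y1][x1] = 2

# 7x7 grid around the ego-vehicle; 1 = unknown
grid = [[1] * 7 for _ in range(7)]
trace_ray(grid, 3, 3, 6, 3, hit=True)   # return from an object to the right
trace_ray(grid, 3, 3, 3, 0, hit=False)  # no return: free to sensing range
print(grid[3][6], grid[3][5], grid[0][3])  # 2 0 0
```

Repeating this for every beam of every LIDAR, every frame, is what turns individual scans into a consistent snapshot of free, occupied, and still-unknown space around the vehicle.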

Young-Woo Seo, Generating omni-directional view of neighboring objects for ensuring safe urban driving, Tech. Report CMU-RI-TR-14-11, the Robotics Institute, Carnegie Mellon University, June, 2014. [link]

Recognition of Highway Workzones for Reliable Autonomous Driving

   To be deployed in real-world driving environments, autonomous ground vehicles must be able to recognize and respond to exceptional road conditions, such as highway workzones, because such unusual events can alter previously known traffic rules and road geometry. In this work, we investigate a set of computer vision methods that recognize, through identification of workzone signs, the bounds of a highway workzone and temporary changes in the highway driving environment. Our approach filters out irrelevant image regions, localizes potential sign regions using a learned color model, and recognizes signs through classification. The performance of the individual modules is promising; still, it is unrealistic to expect perfect sign recognition, and errors in individual modules would cause our system to misread temporary highway changes. To handle potential recognition errors, our method exploits the temporal redundancy of sign occurrences and their corresponding classification decisions. In tests with video data recorded under various weather conditions, our approach reliably identified the boundaries of workzones and robustly detected the majority of driving-condition changes. [project page]
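The temporal-redundancy idea can be illustrated as a sliding-window majority vote over per-frame classification decisions (a simplification of the method in the paper):

```python
from collections import Counter, deque

def temporally_filtered(labels, window=5):
    """Smooth per-frame sign classifications by majority vote over a
    sliding window, suppressing isolated misclassifications."""
    buf = deque(maxlen=window)
    out = []
    for lab in labels:
        buf.append(lab)
        out.append(Counter(buf).most_common(1)[0][0])
    return out

# per-frame outputs for one workzone sign, with two spurious errors
frames = (["workzone"] * 4 + ["speed-limit"] + ["workzone"] * 3
          + ["none"] + ["workzone"] * 3)
print(temporally_filtered(frames).count("workzone"))  # 12
```

Because a real sign is observed over many consecutive frames while classifier errors tend to be isolated, the vote recovers the correct label for every frame here.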

 

Young-Woo Seo, Jongho Lee, David Wettergreen, and Wende Zhang, Recognition of highway workzones for reliable autonomous driving, IEEE Transactions on Intelligent Transportation Systems, T-ITS-14-01-0020, in press, 2014. [10.1109/TITS.2014.2335535]

Jongho Lee, Young-Woo Seo and David Wettergreen, Kernel-based tracking for improving sign detection performance, In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2013), pp. 4388-4393, Nov 3-8, Tokyo, Japan, 2013. [10.1109/IROS.2013.6696986]

Jongho Lee, Young-Woo Seo, David Wettergreen, and Wende Zhang, Kernel-based traffic sign tracking to improve highway workzone recognition for reliable autonomous driving, In Proceedings of the IEEE International Conference on Intelligent Transportation Systems (ITSC-2013), pp. 1131-1136, Oct 6-9, Hague, Netherlands, 2013. [10.1109/ITSC.2013.6728384]

Young-Woo Seo, David Wettergreen, and Wende Zhang, Recognizing temporary changes on highways for reliable autonomous driving, In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC-2012), pp. 3021-3026, Seoul, Korea, 2012. [10.1109/ICSMC.2012.6378255] (the finalist of the best student paper award)

Ortho-Image Analysis for Building Maps for Autonomous Driving

   Maps are important for both human and robot navigation. Given a route, driving assistance systems consult maps to guide human drivers to their destinations. Similarly, topological maps of a road network provide a robotic vehicle with information about where it can drive and what driving behaviors it should use. By providing the necessary information about the driving environment, maps simplify both manual and autonomous driving. The majority of existing cartographic databases are built using manual surveys and operator interactions, primarily to assist human navigation. Hence the resolution of existing maps is insufficient for robotics applications, and their coverage fails to extend to places where robotics applications require detailed geometric information. To augment the resolution and coverage of existing maps, this work investigates computer vision algorithms that automatically build lane-level detailed maps of highways and parking lots by analyzing publicly available cartographic resources such as orthoimagery [project page].

Young-Woo Seo, David Wettergreen, and Chris Urmson, Exploiting publicly available cartographic resources for aerial image analysis, In Proceedings of the ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS-2012), pp. 109-118, Redondo Beach, CA, 2012. [10.1145/2424321.2424336]

Young-Woo Seo, David Wettergreen, and Chris Urmson, Ortho-image analysis for producing lane-level highway maps, In Proceedings of the ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS-2012), pp. 506-509, Redondo Beach, CA, 2012. [10.1145/2424321.2424401 |10-page version pdf | ri-tech-report]

Young-Woo Seo, Chris Urmson, David Wettergreen, and Jin-Woo Lee, Building lane-graphs for autonomous parking, In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2010), pp. 6052-6057, Taipei, Taiwan, 2010. [10.1109/IROS.2010.5650331]

Young-Woo Seo, Chris Urmson, David Wettergreen, and Jin-Woo Lee, Augmenting cartographic resources for autonomous driving, In Proceedings of the ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS-2009), pp. 13-22, Seattle, WA, November, 2009. [10.1145/1653771.1653777]

Young-Woo Seo and Chris Urmson, Utilizing prior information to enhance self-supervised aerial image analysis for extracting parking lot structures, In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2009), pp. 339-344, St. Louis, MO, October, 2009. [10.1109/IROS.2009.5354405]

Young-Woo Seo, Nathan Ratliff, and Chris Urmson, Self-supervised aerial image analysis for extracting parking lot structure, In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-2009), pp. 1837-1842, Pasadena, CA, July, 2009. [pdf]

   
Use of a Monocular Vision Sensor for Estimating Depth to Find Drivable Regions

 

   This work develops a computer vision algorithm that provides a mobile robot with depth-estimated still images and enables the robot to navigate its environment with only a monocular camera. The task comprises four sub-tasks: collecting environment-specific data, estimating depth from the collected data, learning the mapping between depths and image characteristics, and generating a set of vertical stripes for choosing a steering direction [documentation].

   
Tartan Racing

   In the 2007 DARPA Urban Challenge, fully autonomous ground vehicles conducted simulated military supply missions in a mock urban area. Robotic vehicles attempted to complete a 60-mile course through traffic in less than six hours, operating solely under their own computer-based control. To succeed, vehicles had to obey traffic laws while safely merging into moving traffic, navigating traffic circles, negotiating busy intersections, and avoiding obstacles [RI project page][official project page].

Young-Woo Seo and Chris Urmson, A perception mechanism for supporting autonomous intersection handling in urban driving, In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2008), pp. 1830-1835, Nice, France, September, 2008. [10.1109/IROS.2008.4651162]

Chris Urmson, Joshua Anhalt, Drew Bagnell, Christopher Baker, Robert Bittner, M. N. Clark, John Dolan, Dave Duggins, Tugrul Galatali, Chris Geyer, Michele Gittleman, Sam Harbaugh, Martial Hebert, Thomas M. Howard, Sascha Kolski, Alonzo Kelly, Maxim Likhachev, Matt McNaughton, Nick Miller, Kevin Peterson, Brian Pilnick, Raj Rajkumar, Paul Rybski, Bryan Salesky, Young-Woo Seo, Sanjiv Singh, Jarrod Snider, Anthony Stentz, William Red Whittaker, Ziv Wolkowicki, Jason Ziglar, Hong Bae, Thomas Brown, Daniel Demitrish, Bakhtiar Litkouhi, Jim Nickolaou, Varsha Sadekar, Wende Zhang, Joshua Struble, Michael Taylor, Michael Darms, and Dave Ferguson, Autonomous driving in urban environments: Boss and the urban challenge, Journal of Field Robotics: Special Issue on the 2007 DARPA Urban Challenge, Part I, pp. 425-466, 2008.

Chris Urmson, Joshua Anhalt, Drew Bagnell, Christopher Baker, Robert Bittner, John Dolan, Dave Duggins, Dave Ferguson, Tugrul Galatali, Hartmut Geyer, Michele Gittleman, Sam Harbaugh, Martial Hebert, Thomas M. Howard, Alonzo Kelly, David Kohanbash, Maxim Likhachev, Nick Miller, Kevin Peterson, Raj Rajkumar, Paul Rybski, Bryan Salesky, Sebastian Scherer, Young-Woo Seo, Reid Simmons, Sanjiv Singh, Jarrod Snider, Anthony Stentz, William Red Whittaker, and Jason Ziglar, Tartan racing: a multi-modal approach to the DARPA Urban Challenge, Tech Report, the Robotics Institute, Carnegie Mellon University, 2007.

 

Where is the "BOSS"? Monte Carlo Localization for an Autonomous Ground Vehicle Using an Aerial LIDAR Map

   Most current outdoor localization methods rely heavily on pose estimation from GPS and inertial measurement. However, GPS accuracy is limited and depends on unobstructed views of the sky, and inertial measurement systems that tolerate outdoor driving conditions are very expensive. We attempt to localize a robotic ground vehicle using only noisy vehicle-speed estimates and a 3D laser scanner. In addition, whereas most localization systems use maps generated by other ground sensors, we use a map generated from aerial LIDAR [documentation].
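A toy version of Monte Carlo localization on a one-dimensional road, using only a noisy speed input and a range observation to a single mapped landmark; the actual system matches 3D scans against an aerial-LIDAR map, and the noise parameters below are illustrative:

```python
import math
import random

def mcl_step(particles, speed, z, landmark, sigma=1.0):
    """One Monte Carlo localization step on a 1-D road: propagate each
    particle with the noisy speed estimate, weight it by the likelihood
    of the observed range to a mapped landmark, then resample."""
    moved = [p + speed + random.gauss(0, 0.1) for p in particles]
    weights = [math.exp(-((landmark - p) - z) ** 2 / (2 * sigma ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(moved))

random.seed(0)
landmark = 50.0                                   # landmark position from the map
particles = [random.uniform(0, 20) for _ in range(500)]
true_pos = 5.0
for _ in range(8):                                # drive forward 1 m per step
    true_pos += 1.0
    particles = mcl_step(particles, 1.0, landmark - true_pos, landmark)
est = sum(particles) / len(particles)
print(abs(est - true_pos) < 1.5)  # True
```

Starting from a broad prior over a 20 m stretch, the particle cloud collapses around the true position within a few observations, which is the behavior that lets the filter survive noisy odometry.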

 
   
A Multi-Agent System for Enforcing "Need-To-Know" Security Policies
 

   "Need-to-know" authorization grants access to confidential information only if that information is necessary for the requester's task or project. Here, confidential information refers to material containing knowledge that is sensitive to an individual or organization, so its careless release may be damaging. We devised a multi-agent system architecture for the adaptive authorization of access to confidential information. The developed system provides "need-to-know," content-based authorization of requests for confidential information, extending the protections offered by security mechanisms such as access control lists (ACLs). We treat the authorization task as a text classification problem in which the classifier must learn a human supervisor's decision criteria from small amounts of labeled information (e.g., 20 to 30 textual documents) and must generalize to other documents with a near-zero false alarm rate. Since "need-to-know" authorizations must be determined for multiple tasks, multiple users, and multiple collections of confidential information, with quick turnaround from definition to use, the authorization agent must be adaptive and capable of learning new profiles quickly, with little impact on the productivity of the human supervisor and the human end-user. When a request for confidential information occurs, the authorization agent compares the content of the requested information to the description of the requester's project; the request is approved if the requester's project is determined to be "relevant" to the requested item. An attempt to access a unit of confidential information that the requester does not "need to know" is rejected because the requester's project description bears no similarity to that information.
To this end, we examined five different text classification methods for this problem, "agentified" the best performer, and inserted it into a secure document management system. This work is significant in that it enables a human supervisor to conveniently and cost-effectively identify arbitrary subsets of confidential information and associate security policies with them. By integrating with a secure document management system, the multi-agent system enables the automatic enforcement of such security policies, as well as tracking of authorized and unauthorized attempts to access the confidential information.
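The relevance test at the core of the authorization decision can be sketched with a bag-of-words cosine similarity; the threshold, tokenization, and example texts below are illustrative, and the deployed classifiers were more sophisticated:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words text vectors."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb)

def authorize(project_desc, document, threshold=0.3):
    """Grant access only when the requested document is sufficiently
    relevant to the requester's project description."""
    return cosine(project_desc, document) >= threshold

project = "radar sensor calibration for airborne platforms"
relevant = "calibration notes for the airborne radar sensor array"
unrelated = "quarterly cafeteria menu and catering budget"
print(authorize(project, relevant), authorize(project, unrelated))  # True False
```

A request whose project description shares essentially no vocabulary with the document scores near zero and is rejected, matching the "need-to-know" behavior described above.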

Young-Woo Seo and Katia Sycara, Addressing insider threat through cost-sensitive document classification, Terrorism Informatics, pp. 451-472, Springer, 2008. [10.1007/978-0-387-71613-8_21]

Young-Woo Seo and Katia Sycara, Cost-sensitive access control for illegitimate confidential access by insiders, In Proceedings of the IEEE Intelligence and Security Informatics Conference (ISI-2006), pp. 117-128, San Diego, May, 2006 (awarded the "best paper honorable mention"). [10.1007/11760146_11]

Young-Woo Seo, Joseph Giampapa, and Katia Sycara, A multi-agent system for enforcing "Need-To-Know" security policies, In Proceedings of the International Conference on Autonomous Agents and Multi Agent Systems (AAMAS) Workshop on Agent Oriented Information Systems (AOIS), pp. 179-163, New York, New York, July, 2004.

   
AFOSR PRET: Information Fusion for Command and Control

   This work develops a method that helps a software agent discern a set of relevant and essential information from all available information sources. The resulting method enables a software agent to accomplish a given task on time by effectively utilizing the identified set of information. It is also useful for properly handling the problem of "data overload, information starvation"; for example, it can help a human decision maker draw a timely conclusion with less uncertainty. As preliminary work, I investigated the related literature on link analysis, social network analysis, and the modeling of trust and reputation in multi-agent systems. As a result, we developed a new model for estimating the reliability (or trust) of information provided by agents in a multi-agent community. This model helps an agent determine which agents in its community are trustworthy, so that it can accomplish its tasks efficiently by collaborating with them. Following human intuition, the trustworthiness of an agent is estimated by linearly combining two factors: the truster's direct experiences and reports of the target agent's reputation from other agents. [project page]
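The linear combination described in the last sentence can be written down directly; the weight alpha and the example scores below are illustrative choices:

```python
def trustworthiness(direct, reputations, alpha=0.7):
    """Linearly combine the truster's own interaction history with
    reputation reports from other agents, weighted by alpha."""
    reputation = sum(reputations) / len(reputations) if reputations else 0.0
    return alpha * direct + (1 - alpha) * reputation

# agent with good direct experience but mixed reputation reports
t = trustworthiness(0.9, [0.6, 0.8, 0.4])
print(round(t, 2))  # 0.81
```

A larger alpha makes the truster lean on its own experience; a smaller one lets community reputation dominate, which matters most when direct experience is scarce.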

Young-Woo Seo and Katia Sycara, Exploiting multi-agent interactions for identifying the best-payoff information source, In Proceedings of the IEEE/ACM Conference on Intelligent Agent Technology (IAT-2005), pp. 344-350, Compiegne, France, September, 2005. [10.1109/IAT.2005.75]

Joseph Giampapa, Katia Sycara, Sean Owens, Robin Glinton, Young-Woo Seo, Bin Yu, Chuck Grindle, Yang Xu, and Mike Lewis, An agent-based C4ISR testbed, In Proceedings of the International Conference on Information Fusion (Fusion-2005), Philadelphia, PA, July, 2005. [10.1109/ICIF.2005.1592030]

Joseph Giampapa, Katia Sycara, Sean Owens, Robin Glinton, Young-Woo Seo, Bin Yu, Chuck Grindle, and Mike Lewis, Extending the OneSAF testbed into a C4ISR testbed, Simulation: Special Issue on Military Simulation Systems and Command and Control Systems Interoperability, Vol. 80, No. 12, pp. 681-691, 2004. [10.1177/0037549704050348]

   
TextMiner: Mining Knowledge from Ubiquitous and Unstructured Text

   TextMiner is one result of our text learning research. Text learning, also called text mining, refers to the application of machine learning (or data mining) techniques to problems in information retrieval and natural language processing. Loosely speaking, it is the discovery of knowledge from ubiquitous text data that are easily accessible over the Internet or an intranet. I believe the study of text learning is another way of understanding natural language, one of the primary media through which humans communicate with each other. The field comprises various sub-fields: text classification, clustering, summarization, extraction, and others. So far, our research has focused on two of them: classification and clustering. Conceptually, TextMiner consists of four layers: user interface, task, learning model, and pre-processing. [project page]

Young-Woo Seo, Anupriya Ankolekar, and Katia Sycara, Feature selections for extracting semantically rich words for ontology learning, In Proceedings of Dagstuhl Seminar Machine Learning for the Semantic Web, February, 2005.

Young-Woo Seo and Katia Sycara, Text clustering for topic detection, Tech Report CMU-RI-TR-04-03, the Robotics Institute, Carnegie Mellon University, 2004.

Anupriya Ankolekar, Young-Woo Seo, and Katia P. Sycara, Investigating semantic knowledge for text learning, In Proceedings of the ACM SIGIR-2003 Workshop on Semantic Web, pp. 9-17, Toronto, Canada, July, 2003.

   
Warren: A Multi-agent System for Assisting Users To Monitor and Manage Their Financial Portfolio

   The WARREN system, an application of the RETSINA multi-agent architecture, deploys a number of different autonomous software agents that acquire information from, and monitor changes to, stock-reporting databases. These agents also interpret stock information, suggest the near-term outlook of an investment, and track and filter relevant financial news articles for the user to read [project page].

Young-Woo Seo, Joseph Giampapa, and Katia Sycara, Financial news analysis for intelligent portfolio management, Tech Report CMU-RI-TR-04-04, the Robotics Institute, Carnegie Mellon University, May 2002.

Young-Woo Seo, Joseph Giampapa, and Katia Sycara, Text classification for intelligent portfolio management, Tech Report CMU-RI-TR-02-14, the Robotics Institute, Carnegie Mellon University, May 2002.

Personalized Information Filtering
 

Byoung-Tak Zhang and Young-Woo Seo, Personalized web-document filtering using reinforcement learning, Applied Artificial Intelligence, Vol. 15, No. 7, pp. 665-685, 2001.

Young-Woo Seo and Byoung-Tak Zhang, Learning user's preferences by analyzing web browsing behaviors, In Proceedings of the ACM International Conference on Autonomous Agents (Agents-2000), pp. 381-387, Barcelona, Spain, 2000. [10.1145/336595.337546]

Young-Woo Seo and Byoung-Tak Zhang, A reinforcement learning agent for personalized information filtering, In Proceedings of the ACM International Conference on Intelligent User Interface (IUI-2000), pp. 248-251, New Orleans, LA, January, 2000. [10.1145/325737.325859]

 Codes
 
  [OpenCV] This source code demonstrates how to fit a line segment to a pixel blob using eigen-analysis [lineFittingTest.cpp]
  [OpenCV] This source code demonstrates how to implement connected-component grouping using recursion [connectedComponentGroupingTest.cpp]
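A sketch of the first technique in Python (the linked code itself is OpenCV C++): the dominant eigenvector of the blob's 2x2 covariance matrix gives the fitted line's direction, computed here in closed form:

```python
import math

def fit_line_eigen(points):
    """Fit a line to a pixel blob via eigen-analysis: the dominant
    eigenvector of the 2x2 covariance matrix is the line direction."""
    n = len(points)
    mx = sum(x for x, y in points) / n
    my = sum(y for x, y in points) / n
    sxx = sum((x - mx) ** 2 for x, y in points) / n
    syy = sum((y - my) ** 2 for x, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # closed-form principal-axis angle of [[sxx, sxy], [sxy, syy]]
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (mx, my), (math.cos(angle), math.sin(angle))

blob = [(i, 2 * i + 1) for i in range(10)]  # pixels along y = 2x + 1
(cx, cy), (dx, dy) = fit_line_eigen(blob)
print(round(dy / dx, 3))  # 2.0
```

The line passes through the blob centroid along the recovered direction; unlike ordinary least squares on y(x), this total-least-squares fit also handles near-vertical blobs.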
 
 Links
 Publication Source
   Journal, Conference and Workshop | IEEE conference proceedings browse (Intelligent Systems Conference) | ACM Conferences
   Conference rankings by ANU and NICTA | Statistics on conference acceptance rate
   CFPs on [robotics | AI | data mining | machine learning | vision] | Vision related conference list by IRIS USC
 
 Periodicals of interest
   IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) | Knowledge and Data Engineering (TKDE) | Robotics
   IEEE Intelligent Systems | Robotics and Automation Magazine
   AAAI AI Magazine | AI toons | The Future of AI
   Journal of Field Robotics | International Journal of Robotics Research (IJRR)
   Elsevier Robotics and Autonomous Systems
 
Research Events of Interest (Past Events)
Title | Submission Due | Event Dates | Location
FSR-13 | Sep 11, 2013 | Dec 9-11, 2013 | Brisbane, Australia
ICRA-14 | Sep 15, 2013 | May 31-Jun 5, 2014 | Hong Kong, China
CVPR-14 | Nov 1, 2013 | Jun 24-27, 2014 | Columbus, OH
IEEE ISSNIP-14 | Nov 17/24, 2013 | Apr 21-24, 2014 | Singapore
ICPR-14 | Dec 20, 2013 | Aug 24-28, 2014 | Stockholm, Sweden
RSS-14 | Jan 30, 2014 | Jul 12-16, 2014 | Berkeley, CA
IV-14 | Jan 31, 2014 | Jun 8-11, 2014 | Dearborn, MI
ICIP-14 | Jan 31, 2014 | Oct 27-30, 2014 | Paris, France
AAAI-14 | Jan 31/Feb 4, 2014 | Jul 27-31, 2014 | Quebec City, Canada
ICML-14 | Jan 31, 2014 | Jun 21-26, 2014 | Beijing, China
IROS-14 | Feb 5, 2014 | Sep 14-18, 2014 | Chicago, IL
KDD-14 | Feb 13/21, 2014 | Aug 24-27, 2014 | New York, NY
ICIF | Mar 23, 2014 | Jul 7-10, 2014 | Salamanca, Spain
ECCV-14 | Mar 7, 2014 | Sep 6-12, 2014 | Zurich, Switzerland
BMVC-14 | May 9, 2014 | Sep 1-5, 2014 | Nottingham, England
ITSC-14 | Jun 22, 2014 | Oct 8-11, 2014 | Qingdao, China
ACM GIS-14 | Jun 17/24, 2014 | Nov 4-7, 2014 | Dallas, TX

Last modified: Nov 15, 2013