Carnegie Mellon University
 Computer Science Department

AI Seminar 02/03 Schedule

Time: Tuesday 3:30-4:30pm
Place: Wean Hall 5409

Organizer: Dr. Tuomas Sandholm

Assistant: Phyllis Pomerantz (plp@cs.cmu.edu).  Unless a different person is named below, please contact Phyllis for appointments with the speakers.


Speakers' Guide | Mailing List | Past years' AI seminars: 01 00 99 98 97
Other AI-related seminars at CMU: CALD Seminar | RI Seminar | LTI Seminar | CNBC Seminar | VASC Seminar | PITT ISP Seminar


Date – Speaker, Affiliation: Topic

9/10/02 – Sebastian Thrun, CMU School of Computer Science: Particle Filters in Robotics
10/15/02 – Eric Horvitz, Microsoft Research: Uncertainty, Intelligence, and Interaction
10/22/02 – Mark Satterthwaite, Northwestern U., Kellogg, MEDS: Dynamic Double Auctions
10/28/02 – Paul Bennett, New York Stock Exchange: Current Issues in Equity Markets: A NYSE perspective. NOTE: in the CBI conference room (on the third floor of GSIA), 4-5pm.
10/29/02 – Tai Sing Lee, CMU School of Computer Science: The language of neurons in the visual system
11/4/02 – Ian Horswill, Northwestern U., CS: Higher-order behavior-based systems. NOTE: 8220 Wean Hall, 11am. For appointments with the speaker, contact Sharon Woodside (sharonw@cs.cmu.edu).
11/5/02 – Andrew McLennan, Univ. Minnesota: The Asymptotic Expected Number of Nash Equilibria of Two Player Normal Form Games
11/19/02 – Pedro Domingos, Univ. Washington: Learning from Networks of Examples (For appointments, contact jean@cs.cmu.edu)
11/22/02, 3:30-4:40pm, NSH 1305 – Victor Lesser, UMass CS: Experience Building a Real Distributed Sensor Network
11/26/02 – Craig Boutilier, U. Toronto, CS: Preference Elicitation as a Partially Observable Decision Process
12/3/02 – Nicola Muscettola, NASA Ames: Planning, Execution, Life, the Solar System and Everything. For appointments, contact Chris Downey (cdowney@cs.cmu.edu).
12/4/02, Wean Hall 5409, 12-1pm – Nicola Muscettola, NASA Ames: Computing the Envelope for Stepwise-Constant Resource Allocations. For appointments, contact Chris Downey (cdowney@cs.cmu.edu).
1/10, 10-10:30am, Wean 5409 – Jon Kleinberg, Cornell Univ.: An Impossibility Theorem for Clustering (For appointments, contact Yiming Yang [yiming@cs.cmu.edu])
1/9-1/10 – Workshop on Graph Partitioning in Vision and Machine Learning. Wean 5409; all with an AI seminar interest are welcome.
1/23/03, 12-1pm – Michael Jordan, UC Berkeley, EECS: Machine learning and the integration of multiple data sources
2/11/03 – Carl de Marcken, ITA Software: The Computational Complexity of Air Travel Planning
3/4/03 – Norman Sadeh, CMU School of Computer Science: Dynamic Supply Chain Management: An AI Perspective
3/11/03 – Bart Selman, Cornell U., CS: Recent Advances in Fast Large-Scale Reasoning Methods
3/25/03 – SPRING BREAK: No talk
4/8/03 – David Jensen, UMass Amherst: Unique Challenges of Learning Statistical Models of Relational Data (For appointments, contact jean@cs.cmu.edu)
4/15/03 – Padhraic Smyth, UC Irvine: Model-Based Clustering of Sequences, Curves, and Images (For appointments, contact jlentz@cs.cmu.edu)
4/29/03 – Lise Getoor, U. Maryland, CS: Learning Statistical Models from Relational Data (For appointments, contact jean@cs.cmu.edu)
5/30/03, 2pm, Wean 5409 – Paolo Traverso, Automated Reasoning Systems, IRST in Trento, Italy: Planning under Uncertainty by Model Checking (For appointments, contact Rune M. Jensen [runej@cs.cmu.edu])
7/22/03 – Anatole Gershman, Accenture Technology Labs: Can we do better than Google in exploring bio-medical knowledge?
TBA – Michael Lewicki, CMU School of Computer Science: TBA
TBA – Tom Mitchell, CMU School of Computer Science: TBA (Brain Science & Machine Learning)
TBA – Michael Kearns, UPenn CIS: TBA
TBA – Leslie Kaelbling, MIT: TBA


The AI seminars are open to the public and will be held on most Tuesdays at 3:30pm in Wean Hall, Carnegie Mellon University.  Special AI seminars can be arranged for visitors on dates other than Tuesdays. The schedule will be updated daily. To volunteer to give an AI seminar or to nominate an outside speaker, contact Dr. Tuomas Sandholm at sandholm@cs.cmu.edu.


All AI seminars will be posted to both cboards and bboards (cs & robotics).  If you would like to receive (or stop receiving) email reminders, please subscribe to (or unsubscribe from) the ai-seminar-announcements mailing list by sending email to Michael.Lewicki@cs.cmu.edu with "Subscribe ai-seminar-announcements" or "Unsubscribe ai-seminar-announcements" in the subject of your message.

 


9/10/02 – Sebastian Thrun   Carnegie Mellon University, School of Computer Science  

 

Particle Filters in Robotics

 

This presentation will introduce the audience to an emerging body

of research on sequential Monte Carlo techniques in robotics. In

recent years, particle filters have solved several hard perceptual

robotic problems. Early successes were limited to low-dimensional

problems, such as the problem of robot localization in environments

with known maps. More recently, researchers have begun exploiting

structural properties of robotic domains that have led to successful

particle filter applications in spaces with as many as 100,000

dimensions. The presentation will discuss specific 'tricks' necessary

to make these techniques work in real-world domains, and also discuss

open challenges.

 

Joint work with Michael Montemerlo, Daphne Koller, and Ben Wegbreit.
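
For readers new to the technique, here is a minimal sketch of a particle filter for one-dimensional robot localization. Everything in it (the motion and sensor noise levels, the particle count) is an illustrative assumption, not a detail from the talk.

import numpy as np

rng = np.random.default_rng(0)

N = 1000
particles = rng.uniform(0, 10, N)    # prior belief: robot somewhere on [0, 10]
true_pos = 3.0

for step in range(5):
    true_pos += 1.0                              # robot moves 1 unit per step
    particles += 1.0 + rng.normal(0, 0.1, N)     # motion model with noise
    z = true_pos + rng.normal(0, 0.5)            # noisy position measurement
    w = np.exp(-0.5 * ((particles - z) / 0.5) ** 2)   # measurement likelihood
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]  # importance resampling

print(particles.mean(), true_pos)    # the particle cloud tracks the true pose

The same predict-weight-resample loop, combined with the structural exploitation and the 'tricks' mentioned above, is what scales to the high-dimensional problems the talk covers.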

 


10/15/02 – Eric Horvitz  Microsoft Research   

 

Uncertainty, Intelligence, and Interaction

 

Nearly fifteen years ago, a renaissance in methods for representing and

reasoning under uncertainty began to influence several subdisciplines of

computer science. I will present research on harnessing explicit

representations of probability and preferences in software applications

and services, with a focus on developments in human-computer

interaction.  I will review attempts to wrestle with inescapable

uncertainties about the intentions, attention, and background knowledge

of computer users. After highlighting key ideas in the context of

several representative projects at Microsoft Research, I will discuss

longer-term research aimed at embedding representation, inference, and

learning under uncertainty more deeply into the fabric of computer

systems and services.

 

 

Eric Horvitz is a Senior Researcher at Microsoft Research, where he

manages the Adaptive Systems & Interaction group. His interests include

principles of sensing, learning, and reasoning under uncertainty, and

applications of probability and utility in computational problem

solving, information retrieval, and human-computer interaction.  He has

served as Area Editor of the Decisions, Uncertainty, and Computation

Area of the Journal of the ACM, as a member of the Information Science and

Technology Board of DARPA and of the Naval Research Advisory Committee, and has been elected Councilor and Fellow of the American Association for

Artificial Intelligence. He received PhD and MD degrees from Stanford

University.

 

More information is available at: http://research.microsoft.com/~horvitz


10/22/02 – Mark Satterthwaite  Kellogg Graduate School of Management, Department of Management and Strategy (MEDS), Northwestern University

 

Dynamic Double Auctions

 

Economists and computer scientists have extensively studied auctions with two characteristics:

  • The auction is a one-shot game.  The seller commits to either selling the one unit of the good he has for sale at the auction or never selling it.
  • The auction has only one-sided incomplete information.  The seller does not know the private value each participating buyer places on the good, while each buyer knows the seller's cost for the good.

Neither of these characteristics is likely to hold in practice.  An auction that fails to result in trade today is often restaged another day.  Moreover, participants may anticipate this restaging.  A frequent reason that a buyer’s bid is rejected and an auction fails is that the buyer underestimates the opportunity cost that the seller places on the good, bids below that cost, and receives a rejection from the seller.

 

In this paper we present a model that has incomplete information on both sides of the market and that allows sellers and buyers who fail to trade one day to try again the next day.  Consider a decentralized, dynamic market with an infinite horizon in which both buyers and sellers have private information concerning their own values of trading.  A large number of traders enter the market at the beginning of each period.  Each period each buyer is matched with a seller and the pair bargains using the buyer’s bid double auction.  If the buyer succeeds in purchasing the one unit of the good that the seller is offering, then both leave the market with their realized utility.  If they fail to trade, then each trader becomes discouraged with an exogenous probability δ and leaves the market with zero utility.  Traders who do not become discouraged move to the next period and are anonymously rematched.  We solve for steady-state, perfect Bayesian equilibria.  As δ becomes small, the market—in effect—becomes large because each trader expects to stay in the market a long time seeking to complete a trade on favorable terms before becoming discouraged.  We derive conditions under which all equilibrium allocations, as δ → 0, converge to the competitive allocation.

 

(Joint work with Artyom Shneyerov.)

 

Mark Satterthwaite

Earl Dean Howard Professor of Managerial Economics,

Professor of Strategic Management,

Director of the General Motors Research Center for Strategy in Management.  BS 1967, Economics, California Institute of Technology; MS 1969, PhD 1973, Economics, University of Wisconsin, Madison.

 


10/29/02 – Tai Sing Lee Computer Science Department & Center for the Neural Basis of Cognition, Carnegie Mellon University

 

The language of neurons in the visual system

 

To understand the fundamental computational principles

underlying visual perception and to establish a foundation for

communicating directly to visual neurons in the brain using neural

prosthetic devices, we first need to understand the neurons'

language, their representational and encoding strategies.    

In this talk, I will discuss how neurons in the early visual

system encode spatial and temporal information, how their

representational strategies adapt to the statistics of the environment,

what computational purposes they serve, and what

underlying biophysical mechanisms and rules they follow. We are attacking

these problems by eavesdropping and decoding the conversations of the

neurons experimentally, by analyzing and simulating the transfer functions  

and the adaptive dynamics of these neurons, and by studying the statistical

regularities of patterns in the natural environment and their links to

neural representations. I will present some interesting discoveries

we have made in these endeavors.     

 

Joint work with Yuguo Yu, Brian Potetz and Rick Romero.

 

 

Dr. Tai Sing Lee received a Ph.D. in Engineering and Medical

Physics from Harvard University and the Harvard-MIT Division of

Health Science and Technology. He is

involved in experimental and theoretical research to understand

the fundamental neural and computational principles underlying

vision. He has received the McDonnell-Pew Foundation investigator

award and the NSF CAREER award for his interdisciplinary

research. He is currently an associate professor in the Computer

Science Department and in the Center for the Neural Basis of

Cognition at Carnegie Mellon University. 

 


11/4/02 – Ian Horswill  Computer Science Department, Northwestern University

 

Higher-order behavior-based systems

 

Classical artificial intelligence systems presuppose that all knowledge

is stored in a central database of logical assertions or other symbolic

representations and that reasoning consists largely of searching and

sequentially updating that database. While this model has been very

successful for disembodied reasoning systems, it is problematic for

robots. Robots are distributed systems; multiple sensory, reasoning,

and motor control processes run in parallel, often on separate

processors that are only loosely coupled with one another. Each of

these processes necessarily maintains its own separate, limited

representation of the world and task; requiring them to constantly

synchronize with the central knowledge base is probably unrealistic. I

will discuss an alternative class of architectures - tagged

behavior-based systems - that support a large subset of the capabilities

of classical AI architectures, including limited quantified inference,

forward- and backward-chaining, simple natural language question

answering and command following, reification, and computational

reflection, while allowing object representations to remain distributed

across multiple sensory and representational modalities. Although

limited, they also support extremely fast, parallel inference.

 

 

Ian Horswill is an associate professor of computer science at Northwestern University. His research focuses on integrating high-level reasoning systems with low-level sensory-motor systems on autonomous robots. He received his BSc from the University of Minnesota in 1986 and his PhD in computer science from the Massachusetts Institute of Technology in 1993.


11/5/02 – Andrew McLennan   University of Minnesota, Dept. of Economics  

 

The Asymptotic Expected Number of Nash Equilibria of Two Player Normal Form Games

 

The talk will survey results concerning the number of Nash equilibria

possessed by normal form games, emphasizing their implications for the

complexity of related computations.  McKelvey and McLennan (1997)

showed that the number of regular Nash equilibria could be as large as

the generic number of complex roots of the underlying system of

equations, as given by Bernshtein's (1975) theorem.  McLennan (2002)

gives a formula for the mean number of Nash equilibria of a normal

form game, for a particular support.  Summed over the possible

supports, this gives the mean total number of Nash equilibria.  The

present paper uses this formula to investigate the expected number of

Nash equilibria of the two player normal form game in which the two

agents have M and N pure strategies respectively.  Holding M fixed

while N becomes large, the expected number of Nash equilibria is

asymptotically proportional to (ln N)^{M-1}.  The expected number of

Nash equilibria when both players have N pure strategies is the

exponential of N[B + O(\log N/N)], where B = 0.281644 is a constant.

When both players have N pure strategies, as N becomes large the mean

is dominated by equilibria which have each player assigning positive

probability to approximately 31.5915 percent of her pure strategies.

This is joint work with Johannes Berg.
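
In display form, the asymptotics quoted above read as follows (writing E[#NE] for the expected number of Nash equilibria; the symbol c_M is my notation for the M-dependent constant of proportionality):

\mathbb{E}[\#\mathrm{NE}] \;\sim\; c_M \, (\ln N)^{M-1} \qquad (M \text{ fixed},\; N \to \infty)

\mathbb{E}[\#\mathrm{NE}] \;=\; \exp\!\Big( N \big[ B + O(\log N / N) \big] \Big), \quad B = 0.281644\ldots \qquad (\text{both players have } N \text{ pure strategies})

with the mean dominated, in the second case, by equilibria whose supports cover roughly 31.5915 percent of each player's pure strategies.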

Andrew McLennan received a B.A. in mathematics from the

University of Chicago in 1976.  He then studied economics at Princeton

University under the supervision of Hugo Sonnenschein, receiving a

Ph.D. in 1982.  He held assistant professorships at the University of

Toronto from 1980 to 1984 and at Cornell University from 1984 to 1987.

He was an associate professor at the University of Minnesota between

1987 and 2000, and has been a professor there since that time.  He has

held visiting appointments at the Mathematical Sciences Research

Institute, Bristol University, Caltech, and the Institute for Social

and Economic Research at Osaka University.  His main area of research

is noncooperative game theory, with an emphasis on computational

issues, but he has also written on diverse topics in mathematical

economics.


11/19/02 – Pedro Domingos  University of Washington, Dept. of Computer Science and Engineering

 

Learning from Networks of Examples

 

(Joint work with Matt Richardson.)

 

Most machine learning algorithms assume that examples are independent

of each other, but many (or most) domains violate this assumption.

For example, in real markets customers' buying decisions are

influenced by their friends and acquaintances, but data mining for

marketing ignores this (as does traditional economics). In this talk I

will describe how we can learn models that account for example

dependences, and use them to make better decisions. For example, in

the marketing domain we are able to pinpoint the most influential

customers, "seed" the network by marketing to them, and unleash a wave

of word of mouth. We mine these models from collaborative filtering

systems and knowledge-sharing Web sites, and show that they are

surprisingly robust to imperfect knowledge of the network. I will also

survey other applications of learning from networks of examples we are

working on, including: combining link and content information in

Google-style Web search; automatically translating between ontologies

on the Semantic Web; and predicting the evolution of scientific

communities.
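
As a toy illustration of the "seed the network" idea (the paper mines its models from real collaborative-filtering and Web data; the graph, propagation probability, and trial count below are invented), here is an independent-cascade simulation that estimates which single customer's word of mouth reaches the most people:

import random

# Toy friendship graph: node -> list of people it can influence.
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}

def spread(seed, p=0.4, trials=2000):
    """Estimate the expected cascade size when 'seed' is marketed to."""
    total = 0
    for _ in range(trials):
        active, frontier = {seed}, [seed]
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph[u]:
                    if v not in active and random.random() < p:
                        active.add(v)       # v adopts via word of mouth
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

random.seed(0)
best = max(graph, key=spread)               # most influential single customer
print(best, spread(best))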

 

Pedro Domingos is an assistant professor in the Department of

Computer Science and Engineering at the University of Washington. His

research interests are in artificial intelligence, machine learning

and data mining. He received a PhD in Information and Computer

Science from the University of California at Irvine, and is the author

or co-author of over 80 technical publications in the fields of

large-scale machine learning, probabilistic learning, model ensembles,

model selection, cost-sensitive learning, multistrategy learning,

adaptive user interfaces, Web search, data integration, anytime

reasoning, computer graphics, and others. He is associate editor of

JAIR, a member of the editorial board of the Machine Learning journal,

and a co-founder of the International Machine Learning Society. He is

program co-chair of KDD-2003, and has served on numerous program

committees. He has received several awards, including an NSF CAREER

award, a Fulbright scholarship, an IBM Faculty Award, and best paper

awards at KDD-98 and KDD-99.

 


11/22/02 – Victor Lesser  University of Massachusetts at Amherst, Dept. of Computer Science

 

Experience Building a Real Distributed Sensor Network

The DARPA Autonomous Negotiation Teams (ANTs) project’s goal was to explore whether distributed resource allocation is a practical approach in a real-time setting. To this end, DARPA constructed a distributed vehicle monitoring hardware challenge problem testbed, which in its latest incarnation included 20 simple radar sensors/processors connected via a low-bandwidth, channel-based RF scheme for tracking the movement of up to 4 model railroad trains. This lecture will discuss how we approached the problem and what technologies we needed to develop along the way. First, I will discuss the problem and the real-time challenges it poses. Next, I will discuss the overall philosophy of our approach, which includes the idea of an agent organization and satisficing behavior in all aspects of problem solving. I will then present in detail the SRTA soft real-time agent architecture, which we constructed as the underlying basis of our approach; the SPAM soft real-time resource allocation protocol for allocating sensors for real-time tracking; and finally our work on organizational self-design. As part of this presentation, I will show a short movie of the system in operation.



Victor Lesser received his Ph.D. in computer science from Stanford University in 1973. He is currently a professor of computer science at the University of Massachusetts at Amherst. Prior to coming to UMass, he was a member of the research faculty of CMU. He is a founding fellow of AAAI and is considered a leading researcher in the areas of distributed AI/multi-agent systems (he is one of the founders of the field), real-time AI (he developed the concept of approximate processing and its use in design-to-time scheduling), and blackboard systems (he was the system architect for the Hearsay-II speech understanding system). Professor Lesser has been very active in helping to organize and promote the field of Multi-Agent Systems. He was General Chairman of the First International Conference on Multi-Agent Systems (1995), and was the founding president of the International Foundation for Multi-Agent Systems. He has been working on the problem of distributed coordination of intelligent agents for over twenty-five years. This work has involved building complex applications to evaluate the effectiveness of alternative distributed control strategies and has led to the discovery of new computational phenomena. He has built applications in the areas of distributed situation assessment, distributed scheduling of manufacturing work cells, cooperating expert systems for crisis management, distributed planning, cooperating experts for engineering design, distributed network diagnosis, and a distributed constraint satisfaction system for resource allocation in a long-haul communications network. He is especially proud of his group’s recent work as part of the ANTs program at DARPA, where they built a functioning real-time distributed vehicle monitoring system.


11/26/02 – Craig Boutilier  University of Toronto, Dept. of Computer Science

 

Preference Elicitation as a Partially Observable Decision Process

 

Preference elicitation is a key problem facing the deployment of

intelligent systems that make or recommend decisions on the behalf of

users.  Since suitable recommendations require knowledge of the user's

utility function, effort must be expended to elicit this information.

However, not all aspects of a utility function have the same impact on

object-level decision quality; hence determining which information to

extract from a user is itself a sequential decision problem, balancing

the amount of elicitation effort and time with decision quality. In this

talk, I formulate this problem as a partially-observable Markov decision

process (POMDP). The model is very general and can incorporate both

direct (e.g., questions about preferences) and indirect (e.g., measuring

response to a controlled environmental variable such as a Web page)

methods of assessing utilities.

 

Because of the continuous nature of the state and action spaces of this

POMDP, standard techniques cannot be used to solve it. I describe

several methods that can be used to solve this POMDP, including some

that exploit the special structure of preference elicitation to deal

with parameterized belief states and actions.  These methods can be used

with a number of different belief state representations, including

mixture models. I describe very preliminary empirical results pertaining

to the feasibility of the model and discuss many directions for future

research, including more general views of elicitation.
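
The core trade-off, stripped of the POMDP machinery, is a value-of-information calculation: ask another question only if the expected improvement in the final recommendation exceeds the cost of asking. A myopic, one-question sketch with invented utilities and a hypothetical yes/no query:

# Two candidate items; the user's utility for item B is uncertain.
p_high = 0.5                 # prior: P(user values B highly)
u = {"A": 0.6, "B_high": 1.0, "B_low": 0.2}
cost = 0.05                  # cost (time, annoyance) of one more question

# Recommend now: pick the item with highest expected utility.
ev_no_ask = max(u["A"], p_high * u["B_high"] + (1 - p_high) * u["B_low"])

# Ask first: learn which case holds, then pick optimally in each case.
ev_ask = (p_high * max(u["A"], u["B_high"])
          + (1 - p_high) * max(u["A"], u["B_low"])) - cost

print("ask" if ev_ask > ev_no_ask else "recommend now", ev_no_ask, ev_ask)

The POMDP formulation generalizes exactly this comparison to long sequences of direct and indirect queries over continuous belief states.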

 

 

Craig Boutilier is an Associate Professor with the Department of

Computer Science at the University of Toronto. He received his Ph.D. in

Computer Science from the University of Toronto in 1992, and worked as

an Assistant and Associate Professor at the University of British

Columbia from 1991 until his return to Toronto in 1999.

Boutilier's research interests have spanned a wide range of topics, from

knowledge representation, belief revision, default reasoning, and

philosophical logic, to probabilistic reasoning, decision making under

uncertainty, multiagent systems, and machine learning. His current

research efforts focus on various aspects of decision making under

uncertainty: Markov decision processes, game theory and multiagent

decision processes, economic models, reinforcement learning,

probabilistic inference and preference elicitation.

 


12/03/02 – Nicola Muscettola  NASA Ames

 

Planning, Execution, Life, the Solar System and Everything 

 

One of the most exciting endeavors in the next few decades is the search

for life in the Solar System and the Universe at large. NASA is leading

this effort by designing, deploying and operating robotic systems that

will reach planets, planet moons, asteroids and comets searching for

water, organic building blocks and signs of past or present microbial

life. None of these missions will be achievable without substantial

advances in the design, implementation and validation of autonomous

control agents, capable of robustly controlling a robotic explorer in a

hostile environment with very limited or no communication with Earth. In

the first part, this talk gives an overview of planned NASA planetary

missions, with specific emphasis on Mars exploration, and describes some

of the requirements that these missions levy on autonomous control

agents. In the second part, the talk describes some of the research in

real-time autonomous agents conducted at NASA Ames aimed at addressing

these requirements. This research explores two main thrusts: 1)

exploiting planning as the core reasoning engine of an autonomous agent,

with execution simply consisting of the incorporation of sensor feedback

and external goals into a central plan database, the reactive generation

of a plan over a very short time horizon, and the direct execution of

the directives contained in the plan for the next action; and 2)

building plans with flexibility over several dimensions (principally the

temporal and the resource dimensions), so that incorporation of sensor

feedback and reaction to it can occur within the limits specified by the

plan without the need for continuous plan repair or replanning. 

 

Nicola Muscettola is Principal Scientist for Autonomy in the

Computational Science Division at the NASA Ames Research Center. He

received his Diploma di Laurea (B.S/M.S) and his Dottorato di Ricerca

(Ph.D.) from the Politecnico di Milano. From 1987 to 1993 he was

Research Associate and System Scientist at the Robotics Institute,

Carnegie Mellon University. He was the designer and principal

implementor of the Heuristic Scheduling Testbed System (HSTS) that

pioneered planning with flexible temporal constraints and state

variables in observation scheduling for the Hubble Space Telescope.

Since 1993 Dr. Muscettola has been at NASA Ames. He led the flight team

that developed the Planner/Scheduler component of the Remote Agent, the

autonomous control system that, since May 1999, holds the distinction of

being the Artificial Intelligence program that has operated farthest from

Earth (over 60 million kilometers), on the New Millennium Deep Space 1 spacecraft. Dr.

Muscettola's current research interests are in real-time control agent

architectures, planning, scheduling, temporal constraint representations

and algorithms, and resource analysis for flexible temporal networks.


 

12/04/02 – Nicola Muscettola  NASA Ames, 12-1pm, Wean Hall 5409.

 

Computing the Envelope for Stepwise-Constant Resource Allocations

 

A flexible schedule is a resource- and time-consistent network of

activities each consuming or producing some resource. In contrast to a

fixed-time schedule, a flexible schedule does not determine a fixed time

assignment to all events (start or end of all activities) but allows an

event time to be dynamically determined during execution, taking into

account execution-time uncertainty. However, a flexible schedule has a

potential disadvantage with respect to a fixed time schedule since it is

more difficult to compute tight bounds on the resource allocation

profiles. In this paper we describe an efficient algorithm that builds a

resource envelope, the tightest possible such bound. The algorithm is

based on transforming the temporal network of resource consuming and

producing events into a flow network with nodes equal to the events and

edges equal to the necessary predecessor links between events. A staged

maximum flow problem on the network is then used to compute the time of

occurrence and the height of each step of the resource envelope profile.

Each stage has the same worst-case computational complexity as solving a

maximum flow problem on the entire flow network. This makes this method

computationally feasible and promising for use in the inner loop of

flexible-time scheduling algorithms. 
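
To make "resource envelope" concrete, the following brute-force sketch enumerates every consistent integer-time schedule of a tiny invented flexible network and takes the pointwise max/min resource level. This exponential enumeration is exactly what the staged max-flow algorithm described above avoids; the sketch only illustrates what is being computed.

from itertools import product

# Tiny flexible schedule: each event has an integer time window and a
# resource delta (negative = consumption, positive = production/release).
events = {
    "start_A": {"window": (0, 2), "delta": -2},   # activity A takes 2 units
    "end_A":   {"window": (1, 4), "delta": +2},   # ... and releases them
    "start_B": {"window": (0, 3), "delta": -1},
    "end_B":   {"window": (2, 5), "delta": +1},
}
precedences = [("start_A", "end_A"), ("start_B", "end_B")]
names = list(events)
horizon = range(6)

def consistent(t):
    return (all(t[a] < t[b] for a, b in precedences) and
            all(events[e]["window"][0] <= t[e] <= events[e]["window"][1]
                for e in names))

upper = {x: float("-inf") for x in horizon}  # envelope: max level at each time
lower = {x: float("inf") for x in horizon}   # ... and min level
for combo in product(horizon, repeat=len(names)):
    t = dict(zip(names, combo))
    if not consistent(t):
        continue
    level = 0
    for x in horizon:
        level += sum(events[e]["delta"] for e in names if t[e] == x)
        upper[x] = max(upper[x], level)
        lower[x] = min(lower[x], level)

for x in horizon:
    print(x, lower[x], upper[x])   # tightest possible resource bounds at time x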

 

(Speaker bio: see the 12/3/02 talk above.)

 


 

1/23/03 – Michael Jordan  UC Berkeley, EECS

 

Machine learning and the integration of multiple data sources

 

Machine learning systems are increasingly being called upon to integrate across data sources having varying formats, semantics and degrees of reliability.  I describe two classes of techniques (one "generative" and the other "discriminative") that aim at solving large-scale data integration problems.  The first class makes use of probabilistic graphical models, a formalism that exploits the conjoined talents of graph theory and probability theory to build complex models out of simpler pieces.  I describe graphical model algorithms that implement a general "empirical Bayesian" approach to data integration.  The second class is based on "kernel methods," an area of machine learning that makes significant use of convex optimization techniques.  I show how multiple kernels can be combined, yielding a problem that is still within the scope of convex optimization.  I illustrate these ideas with examples from information retrieval and bioinformatics.
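
A minimal sketch of the kernel-combination idea: a nonnegative weighted sum of positive semidefinite kernels is again a valid kernel, so kernels built from different data sources can be merged and handed to any standard kernel method. The data, the fixed weights, and the use of scikit-learn's SVC here are illustrative assumptions; in the work described, the combination weights are themselves chosen by convex optimization.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two "data sources" describing the same 60 items.
X1 = rng.normal(size=(60, 5))
X2 = rng.normal(size=(60, 3))
y = (X1[:, 0] + X2[:, 0] > 0).astype(int)    # toy labels depending on both

K1 = X1 @ X1.T                                             # linear kernel, source 1
K2 = np.exp(-0.5 * np.square(X2[:, None] - X2[None, :]).sum(-1))  # RBF, source 2

# Nonnegative combination of PSD kernels is PSD: still a valid kernel.
K = 0.5 * K1 + 0.5 * K2

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))    # train on the combined kernel matrix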

 

 

Michael Jordan is Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California at Berkeley.  He received his Masters from Arizona State University in 1980, and earned his PhD from the University of California, San Diego in 1986.  He was a professor at the Massachusetts Institute of Technology from 1988 to 1998.  His research interests have spanned a number of fields, including computer science, statistics, and psychophysics.  He has published over 170 research papers in these fields.  He has worked on a variety of topics in the area of machine learning, including neural networks, kernel machines, and probabilistic graphical models.  In recent years he has focused on algorithms for approximate probabilistic inference in graphical models, and on applications of machine learning to problems in information retrieval, bioinformatics and signal processing.

 


 

2/11/03 – Carl de Marcken  ITA Software

 

The Computational Complexity of Air Travel Planning

 

At any moment there are between 2,000 and 10,000 commercial airliners in

the sky, part of a dense network that provides more than 100,000

practical paths from Boston to the San Francisco area every day.

 

But the search problem faced by travel agents, that of finding a

desirable combination of flights and fares for a given passenger's trip,

is much harder than path planning.  In fact, the airlines' price

structure is so rich that finding the cheapest price for a simple

round-trip journey is in the general case undecidable.  Even if one

bounds the size of solutions to a small number of flights there may be

more than 10^20 reasonable answers to a simple travel query.

 

This talk will describe the nature of the travel planning problem from a

computer science perspective, ranging from network properties of the

flights to the practical and theoretical complexity of pricing.  This

should be of general interest to computer scientists and operations

researchers.

 

In addition, the talk will sketch some new search algorithms that are a

radical departure from the brute force methods that have hitherto

dominated the travel industry.  For example, we directly construct a

factored graphical representation of the space of answers to a travel

query that is similar to a Bayes' net or the chart produced by a

context-free parser.  Using this representation a graph of 250,000 nodes

can encode 10^30 or more travel options and efficiently support a wide

range of useful and interesting operations.

 

Demos as time permits.
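
The leverage of a factored representation is that a small shared graph can stand for an astronomically large set of complete answers. A toy sketch (the flight network below is invented) counts the itineraries a DAG of flight legs encodes without ever enumerating them:

from functools import lru_cache

# Toy flight network as a DAG: airport -> next airports reachable by one leg.
graph = {
    "BOS": ["JFK", "ORD", "DCA"],
    "JFK": ["ORD", "DEN"],
    "ORD": ["DEN", "SFO"],
    "DCA": ["ORD"],
    "DEN": ["SFO"],
    "SFO": [],
}

@lru_cache(maxsize=None)
def count_itineraries(node, goal="SFO"):
    # Shared subpaths are counted once and reused, so the work is linear
    # in the graph even when the number of itineraries is exponential.
    if node == goal:
        return 1
    return sum(count_itineraries(nxt) for nxt in graph[node])

print(count_itineraries("BOS"))   # itineraries encoded by the compact graph

Scaled up, the same sharing is how a graph of 250,000 nodes can encode 10^30 or more travel options.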

 

 

BIO:

 

Carl de Marcken is Chief Scientist and co-founder of ITA Software, a company that provides the search engine behind Orbitz and various airline web sites.  Carl has a Ph.D. in Computer Science from MIT, where his research in machine learning and natural language acquisition won the distinguished award for best

computer science thesis.

 


 

3/4/03 – Norman Sadeh  CMU School of Computer Science

 

Dynamic Supply Chain Management: An AI Perspective

 

Today’s global economy is characterized by fast changing market demands, short product lifecycles and increasing pressures to offer high degrees of customization, while keeping costs and lead times to a minimum. In this context, the competitiveness of both manufacturing and service companies will increasingly be tied to their ability to identify promising supply chain partners in response to changing market conditions. Today, however, dynamic supply chain practices are confined to relatively simple scenarios such as those found in the context of Maintenance, Repair and Operations (MRO) procurement.

 

In this presentation, I will attempt to outline key research challenges that need to be addressed if one is to realize the potential benefits of more dynamic supply chain management practices, reflecting among other things on the lessons learned from recent e-marketplace failures. I will continue with an overview of research I’ve been conducting for the past several years on agent-based supply chain decision support environments. This includes ongoing work in collaboration with the University of Michigan on Multi-Attribute Supply Chain Negotiation.

 

If time permits, I will also say a few words about how we’ve redesigned the Trading Agent Competition (TAC), an annual competition started in 2000, to reflect some of the main research challenges involved in supporting dynamic supply chain practices. The competition’s simulation testbed (developed jointly with the Swedish Institute for Computer Science) was released for alpha testing in February. The competition itself will take place at the upcoming 18th International Joint Conference on Artificial Intelligence (IJCAI’03) to be held in Acapulco this summer.

 

 

Norman M. Sadeh is an Associate Professor at the Institute for Software Research International (ISRI) within the School of Computer Science at Carnegie Mellon University (CMU). His research interests include Supply Chain Management, Agent Technologies, Automated Negotiation, the Semantic Web, Mobile Commerce and Internet Privacy. Norman is the founder and director of CMU’s Mobile Commerce Lab and of its e-Supply Chain Management Lab.  He is also interested in the broader business, social and policy implications associated with the emerging Information Society.

 

Norman has been on the faculty at CMU since 1991. In the late nineties, he worked as program manager with the European Commission’s ESPRIT research program, prior to serving for two years as chief scientist of the €550M (US$600M) European research initiative in "New Methods of Work and eCommerce" within the Information Society Technologies (IST) program. Norman received his Ph.D. in Computer Science at CMU with a minor in Operations Research and Operations Management. He has authored around 100 scientific publications and serves on the editorial board of several journals, including Autonomous Agents and Multi-agent Systems (AAMAS) and Electronic Commerce Research and Applications (ECRA). He is also the author of “m-Commerce: Technologies, Services and Business Models”, a book published by Wiley in April 2002, and serves as general chair of the 2003 International Conference on Electronic Commerce (ICEC’03) to be held in Pittsburgh this coming October.

 

 


 

3/11/03 – Bart Selman  Cornell University, Computer Science Dept.

 

Recent Advances in Fast Large-Scale Reasoning Methods

 

Just a few years ago, general propositional inference beyond hundred-variable

problems appeared to be out of practical reach. Current reasoning

engines can handle problems with over a million variables and several

millions of constraints. In this talk, I will discuss what led to such

a dramatic scale-up. We will see that these advances rely on a clever

use of randomization and learning methods, which uncover hidden structure

in practical problem instances. Work in this area has benefited from

an interesting interplay between insights from statistical physics,

combinatorics, and empirical algorithm design. I'll also discuss how

progress in reasoning technology has opened up a range of new applications

in AI and computer science in general.

 

Joint work with Carla Gomes (Cornell).
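
To give a flavor of the randomized local-search methods behind this scale-up, here is a minimal WalkSAT-style sketch (a toy, not any of the solvers discussed in the talk). Clauses are lists of signed integers, and the search mixes random-walk and greedy flips:

import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    # Literal l means variable |l|, negated if l < 0. assign[0] is unused.
    random.seed(seed)
    assign = [random.choice([True, False]) for _ in range(n_vars + 1)]

    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign[1:]                     # model found
        clause = random.choice(unsat)             # focus on a broken clause
        if random.random() < p:                   # random-walk move
            var = abs(random.choice(clause))
        else:                                     # greedy move: break fewest clauses
            def broken_after_flip(v):
                assign[v] = not assign[v]
                count = sum(not satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return count
            var = min((abs(l) for l in clause), key=broken_after_flip)
        assign[var] = not assign[var]
    return None                                   # give up (restart in practice)

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))

Random restarts on failure are the other half of the recipe: they exploit the heavy-tailed run-time distributions that this line of work uncovered.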

 

 

Bart Selman is an associate professor of computer science at Cornell

University. His current research interests include efficient reasoning

procedures, stochastic search methods, knowledge compilation, planning, and

connections between computer science and physics. He has received four

best paper awards at the American and Canadian national artificial intelligence

conferences, and at the international conference on knowledge representation.

He received the Stephen Miles Excellence in Teaching Award, and a Cornell

Outstanding Educator Award. He holds an NSF Career Award and is an Alfred

P. Sloan Research Fellow. He is a Fellow of the American Association for

Artificial Intelligence (AAAI) and a Fellow of the American Association

for the Advancement of Science (AAAS).

 


 

4/8/03 – David Jensen  UMass Amherst, Computer Science

 

Unique Challenges of Learning Statistical Models of Relational Data  

 

Recent work in machine learning and data mining has begun to develop

methods for constructing statistical models of relational data.  Such

data can represent social networks, networks of web pages, complex

relational databases, or interrelated people, places, things, and

events extracted from text documents.

 

In recent work, we have identified several unique characteristics of

relational data that can complicate the process of learning accurate

statistical models.  These characteristics -- relational

autocorrelation and degree disparity -- each occur when the relations

among entities somehow correlate with attribute values on those

entities.  These characteristics can cause learning algorithms to

select models that have weak statistical support from the data, to include

completely spurious components in their models, to ignore very useful

components, and to mistake structural regularities for attribute

correlations.

 

These results imply that new approaches are necessary to extend current

learning algorithms to relational data.  We are developing one

particularly promising class of techniques, based on permutation tests.

These computationally intensive statistical procedures allow us to

adjust for the unique characteristics of a given relational data set

and make accurate hypothesis tests.  We are incorporating these

approaches into algorithms for constructing relational probability

trees.  We conjecture that similar approaches will need to be

incorporated into all accurate techniques for building statistical

models from relational data.
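
A minimal sketch of a permutation test in this spirit (synthetic data, not the paper's relational probability trees): attribute values are shuffled across nodes while the link structure, summarized here by node degree, stays fixed, yielding a null distribution for the observed correlation.

import numpy as np

rng = np.random.default_rng(1)

# Toy relational data: each node has a degree and an attribute value,
# with the attribute deliberately correlated with degree.
degree = rng.poisson(5, size=100)
attr = 0.3 * degree + rng.normal(size=100)

observed = np.corrcoef(degree, attr)[0, 1]

# Null: no degree/attribute association, link structure held fixed.
null = np.array([np.corrcoef(degree, rng.permutation(attr))[0, 1]
                 for _ in range(5000)])
p_value = np.mean(np.abs(null) >= abs(observed))
print(observed, p_value)   # small p-value: association unlikely under the null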

 

 

David Jensen is Research Assistant Professor of Computer Science and

Director of the Knowledge Discovery Laboratory at the University of

Massachusetts at Amherst.  He received his Doctor of Science degree

from Washington University in 1992.  From 1991 to 1995, Dr. Jensen was

an analyst with the Office of Technology Assessment, an agency of the

United States Congress.  Dr. Jensen's current research focuses on

relational knowledge discovery, with applications in web mining,

intelligence analysis, and fraud detection.  Other research interests

include learning in multiagent systems and computing policy.

 


4/15/03 – Padhraic Smyth  University of California, Irvine, School of Information and Computer Science

 

Model-Based Clustering of Sequences, Curves, and Images

Probabilistic model-based clustering is a useful
alternative to traditional clustering approaches in exploratory
data analysis. Typically one uses a finite mixture model as a
generative model for the observed data, and the EM algorithm for
parameter estimation. Most prior work on this topic has focused on
clustering in multivariate spaces of fixed dimension. In this talk
we review recent work on model-based clustering of non-vector
data, such as sets of categorical sequences, curves, or images.
Following a brief review of the basic principles of finite mixture
models, we discuss how mixtures can be generalized to handle
non-vector data. Several illustrative examples will be presented,
including applications in modeling of page-request sequences from
Web log data, trajectories of storms from atmospheric science, and
galaxy images from astronomy. Recent extensions to learning in the
presence of transformations (such as shifts, rotations, and so
forth) will also be discussed.
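
As background for the finite-mixture-plus-EM machinery the talk generalizes to sequences, curves, and images, here is a self-contained EM loop for a two-component one-dimensional Gaussian mixture on synthetic data (all numbers are illustrative):

import numpy as np

rng = np.random.default_rng(0)
# Synthetic data from two Gaussians; the fitted mixture is the generative
# model, and cluster membership is the posterior over components.
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

mu = np.array([-1.0, 1.0])          # initial means
sigma = np.array([1.0, 1.0])        # initial standard deviations
pi = np.array([0.5, 0.5])           # initial mixing weights

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: responsibility of each component for each point.
    resp = pi[:, None] * gauss(x[None, :], mu[:, None], sigma[:, None])
    resp /= resp.sum(axis=0)
    # M-step: re-estimate weights, means, variances from responsibilities.
    nk = resp.sum(axis=1)
    pi = nk / len(x)
    mu = (resp * x).sum(axis=1) / nk
    sigma = np.sqrt((resp * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / nk)

print(mu, sigma, pi)   # recovered parameters; argmax of resp gives clusters

The talk's extensions replace the Gaussian density above with component models for sequences, curves, or images while keeping the same EM structure.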


Padhraic Smyth is an associate professor in the School of Information
and Computer Science at the University of California, Irvine.
He received a first class honors Bachelor of Engineering
degree from the National University of Ireland in 1984, and the MSEE
and PhD degrees from the Electrical Engineering Department at the
California Institute of Technology in 1985 and 1988 respectively.
He was a recipient of best paper awards at the ACM SIGKDD 2002 and
1997 conferences, an IBM Faculty Partnership Award in 2001, a
National Science Foundation Faculty CAREER award in 1997, the ACM
Teaching Award at UC Irvine in 1996, and the Lew Allen Award for
Excellence in Research at the Jet Propulsion Laboratory in 1993.
He is co-author of a graduate text in data mining, Principles of
Data Mining, MIT Press, August 2001, with David Hand and Heikki
Mannila and is also co-author of the forthcoming text (with Pierre
Baldi and Paolo Frasconi), Modeling the Internet and the Web:
Probabilistic Methods and Algorithms, John Wiley and Sons, 2003.
He is currently an associate editor for the Journal of the
American Statistical Association and the IEEE Transactions on
Knowledge and Data Engineering. His research interests are in
machine learning, statistical pattern recognition, and applied
statistics.


 

4/29/03 – Lise Getoor  University of Maryland, College Park, Computer Science Department


Learning Statistical Models from Relational Data


(For appointments, contact jean@cs.cmu.edu)

 

A large portion of real-world data is stored in commercial relational

database systems. In contrast, most statistical learning methods work

only with "flat" data representations. Thus, to apply these methods,

we are forced to convert the data into a flat form, thereby losing

much of the relational structure present in the data and potentially

introducing statistical skew. These drawbacks severely limit the

ability of current methods to mine relational databases. In this talk

I will review recent work on probabilistic models, including Bayesian

networks (BNs) and Probabilistic Relational Models (PRMs), and then

describe the development of techniques for automatically inducing PRMs

directly from structured data stored in a relational or

object-oriented database. These algorithms provide the necessary tools

to discover patterns in structured data, and provide new techniques

for mining relational data. As we go along, I'll present experimental

results in several domains, including a biological domain describing

tuberculosis epidemiology, a database of scientific paper author and

citation information, and Web data. Finally, if time permits, I will

present an application of these techniques to the task of selectivity

estimation for database query optimization.

 

 

Lise Getoor recently joined the University of Maryland, College Park

as an assistant professor in the computer science department.  She

received her PhD from Stanford University in December, 2001.  Her

research interests include link analysis, information extraction and

information integration.  She has published papers on a variety of

topics including learning probabilistic models, utility elicitation,

on-line scheduling, constraint-based planning and machine learning.

Before pursuing her PhD at Stanford, she worked at NASA-Ames Research

Center as a research associate.  She received her M.S. in Computer

Science from UC Berkeley in 1989 and her B.S. in Computer Science from

UC Santa Barbara in 1986.  She is the recipient of a National Physical

Sciences Consortium fellowship and a member of ACM, AAAI and Tau Beta

Pi.


5/30/03, 2pm, Wean 5409 – Paolo Traverso  Automated Reasoning Systems, IRST in Trento, Italy


Planning under Uncertainty by Model Checking


(For appointments, contact Rune M. Jensen [runej@cs.cmu.edu])

 

   "Planning under uncertainty" is one of the most significant

   and challenging planning problems. Most realistic applications

   need to deal with uncertainty about the domain and the environment,

   to deal with incomplete knowledge at planning time and

   partial observability at execution time. The problem of planning

   under uncertainty has been shown to be hard, both theoretically

   and experimentally.  "Planning as Model Checking" is a novel

   approach to planning. We have shown that it is a general,

   well founded, and practical approach to the problem of planning

   under uncertainty. In the talk we will introduce the approach,

   and we will present some recent results in planning in

   non-deterministic domains, planning with partial information

   available at run time, and with requirements expressed in

   temporal logic.

 

 

 

   Paolo Traverso is Head of Division at the Institute for Scientific

   and Technological Research (ITC/IRST), Trento, Italy. The Automated

   Reasoning Systems Division consists of about 50 people doing research

   in Planning, Formal Methods, Case Based Reasoning, Knowledge Representation

   and Multi Agent Systems. The division is involved in several

   industrial projects, e.g., in the area of safety critical applications

   (railways, space and avionics sectors), industrial controllers,

   environmental emergency planning, knowledge management and

   distributed systems.

 

   His main research interests are in automated planning, in formal methods

   for software engineering, and in automated reasoning. He has contributed

   to research on formal and mechanized meta-theories, deductive and

   computational reflection, formal verification and validation of

   software, logics for acting, sensing, and planning, and to

   a novel approach to planning, called "Planning as Model Checking".

 

   Paolo Traverso serves as a member of the Editorial Board of

   the Journal of Artificial Intelligence Research (JAIR),

   as a member of the Editorial Board of the Electronic

   Transactions on Artificial Intelligence (ETAI), as a member of

   the Executive Council of the International Conference on Automated

   Planning and Scheduling (ICAPS), and as a member of the Board of

   Directors of the Italian Association for Artificial Intelligence (AI*IA).

 


7/22/03 – Anatole Gershman  Accenture Technology Labs

 

Can we do better than Google in exploring bio-medical knowledge?

 

Edy Liongosari and Anatole Gershman, Accenture Technology Labs

 

Bio-medical researchers use dozens of different, rapidly growing on-line knowledge sources each with its own structure and access methods. Successful research often depends on a researcher’s ability to discover connections among many different sources. The popularity of Google suggests that high-quality indexing would provide a uniform method of access, although it still leaves researchers with vast, undifferentiated lists of results. Hence, the research challenge: can a knowledge-based approach provide a better way for researchers to explore bio-medical knowledge and discover useful insights for their research?

 

The first goal of our research is to create a semantic index for a growing set of key sources of bio-medical knowledge. The second goal is to create intelligent navigation tools that help bio-medical researchers find useful patterns in the underlying data.

 

The current result of our research is the Knowledge Discovery Tool, or KDT, which contains a knowledge model of a number of bio-medical concepts and their relationships: from genes, proteins, biological targets and diseases to articles, researchers and research organizations.  Based on this model, the KDT index identifies over 2.5 million bio-medical entities with two billion relationships among those entities spanning 15 different knowledge sources. Clearly, the creation and maintenance of such an index cannot be done manually.  KDT utilizes an extensive set of rules that cleanse, analyze and integrate data to create a uniform index.

 

Using its index, KDT presents the user with a uniform graphical browsing space integrating all underlying knowledge sources. This space is “warped” and filtered based on domain-specific rules customized for the needs of various groups of users, such as pharmaceutical researchers, clinicians, etc. Another customized set of rules discovers and graphically highlights potential indirect relationships among various entities that might be worth exploring (e.g., relationships between genes or between diseases). Finally, the tool enables several modes of collaboration among its users, from annotations to activity tracking.

 

Currently, KDT is undergoing testing in two pilot settings: an early stage of the drug discovery process in a pharmaceutical company and a bio-medical academic research group. The presentation will include a demonstration of the tool and a discussion of theoretical and practical issues in bio-medical knowledge discovery.

 

 

Anatole Gershman is Director of Research at Accenture Technology Labs. Under his leadership, research at the laboratories focuses on early identification of potential business opportunities and the design of innovative applications for the home, commerce and workplace of the future. The laboratories are conducting research in the areas of ubiquitous computing, interactive multimedia, information access and visualization, intelligent agents, and simulation and modeling.

 

Prior to joining Accenture in 1989, Anatole spent over 15 years conducting research and building commercial systems based on Artificial Intelligence and Natural Language processing technology. He held R&D positions at Coopers & Lybrand, Cognitive Systems, Inc., Schlumberger, and Bell Laboratories.

 

Anatole studied Mathematics and Computer Science at Moscow State Pedagogical University and received his Ph.D. in Computer Science from Yale University in 1979.

 

 

Edy Liongosari is a Senior Researcher at Accenture Technology Labs leading research in knowledge modeling and semantic integration of disparate information sources. This work is currently focused on drug discovery in pharmaceutical companies. This approach has been applied in several other domains and has received several patents and awards. A copy of this research work has also been placed into the Smithsonian's permanent research collection.

 

Edy joined Accenture Technology Labs in June 1989 after graduating from Indiana University at Bloomington, Indiana, with a Master's degree in Computer Science. Some of his projects and publications can be found at accenture.com/techlabs/liongosari.