0)

http://www.cs.utexas.edu/users/UTCS/report/1994/profiles/bledsoe.html

Information extracted from URL : http://www.cs.utexas.edu/users/UTCS/report/1994/profiles/bledsoe.html START
 
 
 
 
1)

http://www.cs.utexas.edu/users/UTCS/report/1994/profiles/jwerth.html

Information extracted from URL : http://www.cs.utexas.edu/users/UTCS/report/1994/profiles/jwerth.html START

Find the information from the current URL:

S. I. Hyder, J. Werth, and J. C. Browne, "A unified model for concurrent debugging," in Proceedings of the 1993 International Conference on Parallel Processing, IEEE Computer Society, August 1993.

J. Werth, J. C. Browne, S. Sobek, T. J. Lee, P. Newton, and R. Jain, "The interaction of the formal and the practical in parallel programming environment development: CODE," Lecture Notes in Computer Science, vol. 589, New York: Springer-Verlag, 1992.

R. Jain, J. S. Werth, and J. C. Browne, "Scheduling parallel I/O operations in multiple bus systems," Journal of Parallel and Distributed Computing, December 1992.

R. Jain, J. S. Werth, and J. C. Browne, "A general model for scheduling of parallel computations and its application to parallel I/O operations," in Proceedings of the 1991 International Conference on Parallel Processing, August 1991.

J. S. Werth and L. H. Werth, "Directions in software engineering education," in Proceedings of the Thirteenth International Conference on Software Engineering, May 1991.

 
 
 
 

2)

http://www.cs.utexas.edu/users/almstrum/welcome.html

Information extracted from URL : http://www.cs.utexas.edu/users/almstrum/welcome.html START
 
 
 
 
3)

http://www.cs.utexas.edu/users/boyer/

Information extracted from URL : http://www.cs.utexas.edu/users/boyer/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/boyer/publications.html
 
 
 
 

4)

http://www.cs.utexas.edu/users/browne/

Information extracted from URL : http://www.cs.utexas.edu/users/browne/ START

Find the information from the current URL:

J. C. Browne, S. I. Hyder, J. Dongarra, K. Moore, and P. Newton, "Visual Programming and Debugging for Parallel Computing," IEEE Parallel and Distributed Technology, Spring 1995, Volume 3, Number 1, 1995. Compares the visual parallel programming environments HeNCE and CODE 2. (21K)

J. C. Browne, S. I. Hyder, J. Dongarra, K. Moore, and P. Newton, "Visual Programming and Debugging for Parallel Computing," Technical Report TR94-229, Dept. of Computer Sciences, Univ. of Texas at Austin, 1994. Compares the visual parallel programming environments HeNCE and CODE 2 (a longer version of the above paper, with more references). (138K)

J. C. Browne, J. S. Werth, et al., "Interaction of the formal and practical in the development of a parallel programming environment: the CODE parallel programming system," in Proceedings of the Fourth Workshop on Languages and Compilers for Parallel Computing, Santa Cruz, California, August 1991.

J. C. Browne, R. Jain, and J. S. Werth, "An experimental study of the effectiveness of high level parallel programming," in Proceedings of the 5th SIAM Conference on Parallel Processing, 1991.

J. C. Browne, D. P. Miranker, and C. M. Kuo, "Parallelizing compilation of rule-based programs," in Proceedings of the 1990 International Conference on Parallel Processing, August 1990, pp. 247-251.

S. I. Hyder, J. Werth, and J. C. Browne, "A unified model for concurrent debugging," in Proceedings of the 1993 International Conference on Parallel Processing, IEEE Computer Society, August 1993.

M. Kleyn and J. C. Browne, "A High Level Language for Specifying Graph-Based Languages and their Programming Environments," 15th International Conference on Software Engineering, Baltimore, MD, April 1993. The PostScript file is an extended version of the above paper. (88K)

P. Newton and J. C. Browne, "The CODE 2.0 Graphical Parallel Programming Language," Proc. ACM Int. Conf. on Supercomputing, July 1992. This paper describes a prototype implementation of CODE 2. Some of the notations have changed, but the ideas are the same. This paper remains a good broad introduction to CODE because it is brief. (91K)
 
 
 
 

5)

http://www.cs.utexas.edu/users/dahlin/

Information extracted from URL : http://www.cs.utexas.edu/users/dahlin/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/dahlin/papers.html

Information extracted from URL : http://www.cs.utexas.edu/users/dahlin/papers.html START

Find links which contain the Publication information:

http://www.cs.utexas.edupapers/sospAbstract.html

Find the information from the current URL:

"Active Naming: Flexible Location and Transport of Wide-area resources"

A. Vahdat, T. Anderson, M. Dahlin

pdf

ps

To appear: The 1999 USENIX Symposium on Internet Technologies and Systems (USITS99), October 1999.

"Hierarchical Cache Consistency in a WAN", J. Yin, L. Alvisi, M. Dahlin, C. Lin To appear: The 1999 USENIX Symposium on Internet Technologies and Systems (USITS99), October 1999.

"Interpreting Stale Load Information"

, M. Dahlin. The 19th IEEE International Conference on Distributed Computing Systems (ICDCS). May 1999.

ps

pdf

ICDCS talk

"Design Considerations for Distributed Caching on the Internet"

Renu Tewari, Michael Dahlin, Harrick Vin, and John Kay. The 19th IEEE International Conference on Distributed Computing Systems (ICDCS). May 1999.

ps

pdf

ICDCS talk

"Coordinated Placement and Replacement for Large-Scale Distributed Caches"

M. Korupolu and M. Dahlin. 1999 Workshop on Internet Applications. June 1999.

ps

pdf

Experimental Evaluation of QSM, a Simple Shared-Memory Model

Brian Grayson, Michael Dahlin, and Vijaya Ramachandran in Proceedings of the 1999 International Parallel Processing Symposium. April 1999.

ps

pdf

"Interpreting Stale Load Information"

, M. Dahlin. University of Texas at Austin Department of Computer Sciences Technical Report TR98-20. October 1998.

ps

pdf

"Volume Leases for Consistency in Large-Scale Systems"

, J. Yin, L. Alvisi, M. Dahlin, and C. Lin. IEEE Transactions on Knowledge and Data Engineering Special Issue on Web Technologies. Jan/Feb 1999.

ps

pdf

"Active Naming: Flexible Location and Transport of Wide-area resources"

A. Vahdat, T. Anderson, M. Dahlin

pdf

ps

"Experimental Evaluation of QSM, a Simple Shared-Memory Model," Brian Grayson, Michael Dahlin, and Vijaya Ramachandran. UTCS Technical Report TR98-21, November 1998. [ps, pdf]

"Emulations Between QSM, BSP and LogP: A Framework for General-Purpose Parallel Algorithm Design," V. Ramachandran, B. Grayson, and M. Dahlin. July 1998. Extended abstract to appear in the proceedings of SODA 1999. [ps, pdf]

"WebOS: Operating System Services For Wide Area Applications," Amin Vahdat, Tom Anderson, Mike Dahlin, Eshwar Belani, David Culler, Paul Eastham, and Chad Yoshikawa. Seventh Symposium on High Performance Distributed Computing, July 1998.

"Using Leases to Support Server-Driven Consistency in Large-Scale Systems," J. Yin, L. Alvisi, M. Dahlin, and C. Lin. Proceedings of the 18th International Conference on Distributed Computing Systems, May 1998. [ps, pdf, extended version (postscript)]

"Beyond Hierarchies: Design Considerations for Distributed Caching on the Internet," Renu Tewari, Michael Dahlin, Harrick Vin, and John Kay. UT CS Technical Report TR98-04, February 1998. [ps, pdf, older version]

"The CRISIS Wide Area Security Architecture," Eshwar Belani, Amin Vahdat, Thomas Anderson, and Michael Dahlin. To appear: The Proceedings of the Seventh USENIX Security Symposium, January 1998.

"Support for Data-Intensive Applications in Large-Scale Systems," NSF Workshop on New Challenges and Directions for Systems Research, St. Louis, Missouri, July 31-August 1, 1997.

"Experience with a Language for Writing Coherence Protocols" (with T. Anderson, S. Chandra, J. Larus, B. Richards, and R. Wang), Proceedings of the USENIX Conference on Domain-Specific Languages, October 1997.

"WebOS: Operating System Services For Wide Area Applications," Amin Vahdat, Paul Eastham, Chad Yoshikawa, Eshwar Belani, Thomas Anderson, David Culler, and Michael Dahlin. U.C. Berkeley Computer Science Division Tech Report UCB CSD-97-938, March 1997.

"WebOS: Software Support For Scalable Web Services," Amin Vahdat, Michael Dahlin, Paul Eastham, Chad Yoshikawa, Thomas Anderson, and David Culler. January 1997.

"WebFS: A Global File System for Fine-Grained Sharing," A. Vahdat, M. Dahlin, P. Eastham, and T. Anderson. Works in Progress session, OSDI '96.

"Turning the Web Into a Computer," Amin Vahdat, Michael Dahlin, and Thomas Anderson. May 1996. [pdf]

"Serverless Network File Systems," M. Dahlin. PhD Thesis, December 1995.

"Serverless Network File Systems," T. Anderson, M. Dahlin, J. Neefe, D. Patterson, D. Roselli, and R. Wang. TOCS, February 1996. An earlier version of this paper was selected as an Award Paper at SOSP, December 1995. [abstract, postscript, SOSP'95 talk]

"Cooperative Caching: Using Remote Client Memory to Improve File System Performance," M. Dahlin, R. Wang, T. Anderson, and D. Patterson. In OSDI, November 1994. [abstract, postscript, OSDI'94 talk]

"A Quantitative Analysis of Cache Policies for Scalable Network File Systems," M. Dahlin, C. Mather, R. Wang, T. Anderson, and D. Patterson. SIGMETRICS, May 1994. [abstract, postscript, SIGMETRICS'94 talk]

"CRAM: A TURBOChannel Board for Fast Lossless Compression," M. Dahlin. Masters project report. [abstract, postscript]

Information extracted from URL : http://www.cs.utexas.edu/users/less/Welcome.html START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/less/publications/cgi/bibByDate.cgi

Find the information from the current URL:

All papers

uFS papers

Beyond Browsers papers

Cilk papers

Lightweight Fault-Tolerance papers

WAFT papers

Language and Compiler papers

Information extracted from URL : http://www.cs.utexas.edu/users/less/bb START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/less/publications/cgi/bibSelectBB.cgi

Information extracted from URL : http://www.cs.utexas.edu/users/less/uFS START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/less/publications/cgi/bibSelectuFS.cgi

Find the information from the current URL:

Information extracted from URL : http://www.cs.utexas.edu/users/dahlin/techTrends START

Information extracted from URL : http://www.cs.utexas.edu/users/dahlin/cfp.html START

Information extracted from URL : http://www.cs.utexas.edu/users/dahlin/research.html START

Information extracted from URL : http://www.cs.rmit.edu.au/~jz/write/leone-how-to.html START

Find links which contain the Publication information:

http://www.cs.rmit.edu.au/~jz/write/levin-writing-papers.ps
 
 
 
 

6)

http://www.cs.utexas.edu/users/diz/

Information extracted from URL : http://www.cs.utexas.edu/users/diz/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/diz/pubs.html

Information extracted from URL : http://www.cs.utexas.edu/users/diz/pubs.html START

Information extracted from URL : http://www.cs.utexas.edu/users/UTCS/report/jan1999/zuckerman.html START

Find the information from the current URL:

A. Russell and D. Zuckerman, "Perfect information leader election in log* n + O(1) rounds," in Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science, 1998, pp. 576-583.

L. J. Schulman and D. Zuckerman, "Asymptotically good codes correcting insertions, deletions and transpositions," in Proceedings of the 8th ACM-SIAM Symposium on Discrete Algorithms, 1997, pp. 669-674.

D. Zuckerman, "Randomness-optimal oblivious sampling," Random Structures and Algorithms, no. 11, 1997, pp. 345-367.

N. Nisan and D. Zuckerman, "Randomness is linear in space," Journal of Computer and System Sciences, vol. 52, 1996, pp. 43-52. Special issue on STOC 1993.

A. Wigderson and D. Zuckerman, "Expanders that beat the eigenvalue bound: explicit construction and applications," Combinatorica, to appear.
 
 
 
 

7)

http://www.cs.utexas.edu/users/dsb/

Information extracted from URL : http://www.cs.utexas.edu/users/dsb/ START

Find the information from the current URL:

Batory, Chen, Robertson, and Wang, "Design Wizards and Visual Programming Environments for GenVoca Generators," to appear, IEEE Transactions on Software Engineering.

Batory, Lofaso, and Smaragdakis, "JTS: A Tool Suite for Building GenVoca Generators," 5th International Conference on Software Reuse (ICSR '98), Victoria, Canada, June 1998.

Smaragdakis and Batory, "Implementing Layered Designs with Mixin Layers," 12th European Conference on Object-Oriented Programming (ECOOP '98), July 1998.

Lance Tokuda and Don Batory, "Automating Three Modes of Evolution for Object-Oriented Software Architectures," 5th Conference on Object-Oriented Technologies (COOTS '99), May 1999.

Batory and Geraci, "Composition Validation and Subjectivity in GenVoca Generators," IEEE Transactions on Software Engineering, 1997.

Batory, Singhal, Sirkin, and Thomas, "Scalable Software Libraries," ACM SIGSOFT, 1993.

Batory and O'Malley, "The Design and Implementation of Hierarchical Software Systems with Reusable Components," ACM Transactions on Software Engineering and Methodology, 1992.

Information extracted from URL : http://www.cs.utexas.edu/users/schwartz/JTS30Beta1.htm START

Find the information from the current URL:

Don Batory, "Product-Line Architectures," invited presentation, Smalltalk und Java in Industrie und Ausbildung, Erfurt, Germany, October 1998.

Yannis Smaragdakis and Don Batory, "Implementing Layered Designs with Mixin Layers," 12th European Conference on Object-Oriented Programming (ECOOP '98), July 1998.

Lance Tokuda and Don Batory, "Automating Three Modes of Evolution for Object-Oriented Software Architectures," 5th Conference on Object-Oriented Technologies (COOTS '99), May 1999.

Lance Tokuda and Don Batory, "Evolving Object-Oriented Architectures with Refactorings," submitted for publication.

Don Batory, Bernie Lofaso, and Yannis Smaragdakis, "JTS: Tools for Implementing Domain-Specific Languages," 5th International Conference on Software Reuse, Victoria, Canada, June 1998.

Yannis Smaragdakis and Don Batory, "Implementing Reusable Object-Oriented Components," 5th International Conference on Software Reuse, Victoria, Canada, June 1998.

Don Batory, Gang Chen, Eric Robertson, and Tao Wang, "Web-Advertised Generators and Design Wizards," 5th International Conference on Software Reuse, Victoria, Canada, June 1998.
 
 
 
 

8)

http://www.cs.utexas.edu/users/emerson/

Information extracted from URL : http://www.cs.utexas.edu/users/emerson/ START
 
 
 
 
9)

http://www.cs.utexas.edu/users/fussell/

Information extracted from URL : http://www.cs.utexas.edu/users/fussell/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/fussell/journal.html
 
 
 
 

10)

http://www.cs.utexas.edu/users/gouda/

Information extracted from URL : http://www.cs.utexas.edu/users/gouda/ START

Find the information from the current URL:

Citation: Gouda, M. G., 1996. "Network Protocols Between Exact Specifications and Pragmatic Implementations," Computing Surveys, 28A(4), December, http://www.acm.org/surveys/1996/GoudaNetwork/

Submission date: June 14, 1996

Revision date (if any): October 15, 1996

Acceptance date: October 31, 1996
 
 
 
 

11)

http://www.cs.utexas.edu/users/kincaid/

Information extracted from URL : http://www.cs.utexas.edu/users/kincaid/ START

Find the information from the current URL:

Jen-Yuan Chen, David R. Kincaid, and David M. Young, "GGMRES Iterative Method" and "MGMRES Iterative Method," in Iterative Methods in Scientific Computation (Junping Wang, Myron B. Allen III, Benito M. Chen, and Tarek Mathew, eds.), IMACS, New Brunswick, NJ, pp. 15-26, 1998.

David R. Kincaid, "Stationary Second-Degree Iterative Methods," Applied Numerical Mathematics, Vol. 16, pp. 227-237, 1994.

David R. Kincaid and David M. Young, "Note on Parallel Alternating-type Iterative Methods," in Iterative Methods in Linear Algebra II (S. D. Margenov and P. S. Vassilevski, eds.), IMACS, New Brunswick, NJ, 1996.

Thomas C. Oppe and David R. Kincaid, "Iterative BLAS," Journal of Applied Science & Computations, Vol. 1, No. 3, pp. 494-520, February 1995.

David M. Young and David R. Kincaid, "Parallel Implementation of a Class of Nonstationary Alternating-Type Methods," in Proceedings of the Third International Colloquium on Numerical Analysis (D. Bainov and V. Covachev, eds.), VSP, Utrecht, The Netherlands, pp. 219-222, 1995.

David M. Young and David R. Kincaid, "A New Class of Parallel Alternating-Type Iterative Methods," Journal of Computational and Applied Mathematics, Vol. 74, pp. 331-344, 1996.
 
 
 
 

12)

http://www.cs.utexas.edu/users/kuipers/

Information extracted from URL : http://www.cs.utexas.edu/users/kuipers/ START

Information extracted from URL : http://www.cs.utexas.edu/users/qr/QR-book.html START
 
 
 
 

13)

http://www.cs.utexas.edu/users/lam/

Information extracted from URL : http://www.cs.utexas.edu/users/lam/ START
 
 
 
 
14)

http://www.cs.utexas.edu/users/lin/

Information extracted from URL : http://www.cs.utexas.edu/users/lin/ START

Find links which contain the Publication information:

http://www.cs.utexas.edupapers/dsl99.ps

Find the information from the current URL:

An Annotation Language for Optimizing Software Libraries, with S. Guyer. 2nd Conference on Domain Specific Languages, (to appear) October 1999.

Volume Leases for Consistency in Large-Scale Systems, with J. Yin, L. Alvisi, and M. Dahlin. IEEE Transactions on Knowledge and Data Engineering, (to appear).

The Case for High Level Parallel Programming in ZPL, with B. Chamberlain, S. Choi, E. Lewis, L. Snyder, and W. Weathersby. IEEE Computational Science and Engineering, 5(3), July-Sept. 1998, pp. 76-86.

The Implementation and Evaluation of Fusion and Contraction in Array Languages, with E. Lewis and L. Snyder. ACM SIGPLAN Conference on Programming Language Design and Implementation, June 1998.

A Flexible Class of Parallel Matrix Multiplication Algorithms, with J. Gunnels, G. Morrow, and R. van de Geijn. 12th International Parallel Processing Symposium and 9th Symposium on Parallel and Distributed Processing, March 1998.

Abstractions for Portable, Scalable Parallel Programming, with G. Alverson, W. Griswold, D. Notkin, and L. Snyder. IEEE Transactions on Parallel and Distributed Systems, vol. 9, no. 1, 1998, pp. 1-17.

Information extracted from URL : http://www.cs.washington.edu/research/zpl/ START

Find links which contain the Publication information:

http://www.cs.washington.edu/research/zpl/papers/papers.html
 
 
 
 

15)

http://www.cs.utexas.edu/users/lorenzo/

Information extracted from URL : http://www.cs.utexas.edu/users/lorenzo/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/lorenzo/publications.html

Information extracted from URL : http://www.cs.utexas.edu/users/lorenzo/lft.html START

Find the information from the current URL:

The Cost of Recovery in Message Logging Protocols (with S. Rao and H. Vin). Proceedings of the 17th International Symposium on Reliable Distributed Systems, West Lafayette, Indiana, October 1998.

The Relative Overhead of Piggybacking in Causal Message-Logging Protocols (with K. Bhatia and K. Marzullo). Proceedings of the Workshop on Advances in Parallel and Distributed Systems (APADS), October 20, 1998, Purdue University, West Lafayette, Indiana.

Hybrid Message-Logging Protocols for Fast Recovery (with S. Rao and H. Vin). Digest of FastAbstracts, The 28th International Symposium on Fault-Tolerant Computing, Munich, Germany, June 1998, pp. 41-42.

Low-Overhead Protocols for Fault-Tolerant File Sharing (with S. Rao and H. Vin). Proceedings of the 18th IEEE International Conference on Distributed Computing Systems, Amsterdam, The Netherlands, May 1998, pp. 452-461.

Fault-Tolerance: Java's Missing Buzzword. Invited paper. Proceedings of the 7th Heterogeneous Computing Workshop (HCW '98), Orlando, Florida, March 1998, pp. 156-158.

Message Logging: Pessimistic, Optimistic, Causal and Optimal (with K. Marzullo). IEEE Transactions on Software Engineering, 24:2, February 1998, pp. 149-159.

Trade-Offs in Implementing Causal Message Logging Protocols (with K. Marzullo). Proceedings of the 15th ACM Annual Symposium on the Principles of Distributed Computing, Philadelphia, May 1996, pp. 58-67.

Message Logging: Pessimistic, Optimistic, and Causal (with K. Marzullo). Proceedings of the 15th IEEE International Conference on Distributed Computing Systems, Vancouver, Canada, June 1995, pp. 229-236.

Deriving optimal checkpoint protocols for distributed shared memory architectures (with K. Marzullo). Selected Papers, International Workshop in Theory and Practice in Distributed Systems, K. Birman, F. Mattern and A. Schiper, eds., Springer-Verlag, 1995, pp. 111-120.

Nonblocking and Orphan-Free Message Logging Protocols (with B. Hoppe and K. Marzullo). Proceedings of the 23rd International Symposium on Fault-Tolerant Computing, Toulouse, France, June 1993, pp. 145-154.


Information extracted from URL : http://www-cse.ucsd.edu/users/marzullo/WAFT/index.html START

Information extracted from URL : http://www.cs.ucsd.edu/users/marzullo START
 
 
 
 

16)

http://www.cs.utexas.edu/users/lwerth/

Information extracted from URL : http://www.cs.utexas.edu/users/lwerth/ START

Find the information from the current URL:

L. H. Werth. "Getting Started in Computer Ethics"

Proceedings of the Twenty-eighth Symposium on Computer Science Education

, Feb. 1997.

L. H. Werth. "Integrating Ethics into a Software Engineering Class"

Software Engineering Education

, N. Mead (ed.) Springer-Verlag, 1996.

L. H. Werth. "Software Process Works for Students:

Proceedings of the Twenty-sixth Symposium on Computer Science Education

, Mar. 1995.

L. H. Werth. "Software Process for Software Engineering Projects" Proceedings of Focus in (Engineering) Education (FIE). Nov, 1995.

L. H. Werth. "An Adventure in Software Process"

Software Engineering Education

, J. Diaz-Herrera (ed.) Springer-Verlag, 1994.

L. H. Werth. "Lecture notes on software process improvement,"

CMU/SEI-93-EM-8

, Feb. 1993.

L. H. Werth, "Quality assurance for a software engineering project,"

IEEE Transactions on Education

, January 1993.

L. H. Werth. "Collaboration with Industry to Provide CASE Tools for Software Engineering Classes." Software Engineering Education, J. Tomako (ed.) Springer-Verlag, 1991.

L. H. Werth. "Industrial-strength CASE tools for software engineering classes," in

Software Engineering Education

, J. Tomayko, Eds. Springer-Verlag, 1991.

L. H. Werth and John S. Werth. "Directions in software engineering education," in

Proceedings from Workshop on Directions in Software Engineering

(ICSE), May 1991.

L. H. Werth. "Object-oriented programming on the Macintosh,"

Journal of Object-Oriented Programming

, Nov.-Dec. 1990.


Information extracted from URL : http://ricis.cl.uh.edu/virt-lib/se_programs.html START
 
 
 
 

17)

http://www.cs.utexas.edu/users/miranker/

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/ START

This URL is a frame structure

Information Extracted from frame URL : http://www.cs.utexas.edu/users/miranker/frbanner.htm START

Information Extracted from frame URL : http://www.cs.utexas.edu/users/miranker/frbanner.htm END

Information Extracted from frame URL : http://www.cs.utexas.edu/users/miranker/frconten.htm START

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/386/index.html START

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/386/index.html END

Information Extracted from frame URL : http://www.cs.utexas.edu/users/miranker/frconten.htm END

Information Extracted from frame URL : http://www.cs.utexas.edu/users/miranker/frmain.htm START

Information extracted from URL : http://www.cs.utexas.edu/users/phoebe/cs386/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/phoebe/cs386/../miranker/386/web-db-papers.html

Information extracted from URL : http://www.cs.utexas.edu/users/phoebe/cs386/ END

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/386/hw6.html START

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/386/hw6.html END

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/386/tp-milestone1.html START

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/386/tp-milestone1.html END

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/386/web-db-papers.html START

Find links which contain the Publication information:

http://www.cs.washington.edu/homes/alon/webdb.ps

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/386/web-db-papers.html END

Information Extracted from frame URL : http://www.cs.utexas.edu/users/miranker/frmain.htm END

Information extracted from URL : http://www.cs.utexas.edu/users/miranker/ END
 
 
 
 

18)

http://www.cs.utexas.edu/users/misra/

Information extracted from URL : http://www.cs.utexas.edu/users/misra/ START

Find the information from the current URL:

Modular Multiprogramming, Formal Methods in System Design, 1999 (to appear).

An Object Model for Multiprogramming, Proc. 10th IPPS/SPDP 98 Workshops, Jose Rolim (ed.), Lecture Notes in Computer Science, Springer-Verlag, Vol. 1388, pp. 881-889, 1998.

(with Al Carruth) Proof of a Real-Time Mutual-Exclusion Algorithm, Parallel Processing Letters, Vol. 6, No. 2, pp. 251-257, 1996.

A Logic for Concurrent Programming (in two parts): Safety and Progress, Journal of Computer and Software Engineering, Vol. 3, No. 2, pp. 239-300, 1995.

Powerlist: A Structure for Parallel Recursion, Vol. 16, No. 6, pp. 1737-1767, November 1994.

Loosely Coupled Processes, Future Generations Computer Systems 8, pp. 269-286, North-Holland, 1992.

Equational Reasoning About Nondeterministic Processes, Formal Aspects of Computing, 2:2, pp. 167-195, 1990.

Specifying Concurrent Objects as Communicating Processes, Science of Computer Programming 14, pp. 159-184, 1990.

Parallel Program Design: A Foundation, K. Mani Chandy and Jayadev Misra, Addison-Wesley, 1988.

(with K. Mani Chandy) Systolic Algorithms as Programs, Distributed Computing, 1:177-183, 1986.

Axioms for Memory Access in Asynchronous Hardware, ACM Transactions on Programming Languages and Systems, Vol. 8, 1:142-153, 1986.

Distributed Discrete Event Simulation, Computing Surveys, Vol. 18, No. 1, pp. 39-65, March 1986.

(with K. M. Chandy) How Processes Learn, Journal of Distributed Computing, 1:40-52, 1986.

(with K. M. Chandy) Proofs of Networks of Processes, IEEE Vol. SE-7, No. 4, pp. 417-426, July 1981.

Other Publications Available Online

(with Rajeev Joshi) Maximally Concurrent Programs

(with Markus Kaltenbach) Using Design Knowledge to Model-Check Progress Properties of Programs

A Foundation of Parallel Programming, Proc. 9th International Summer School on Constructive Methods in Computer Science, Marktoberdorf, Germany, July 24-August 5, 1988, in NATO ASI Series, Vol. F 55, ed. Manfred Broy, Springer-Verlag, pp. 397-433, 1989.

Specification Structuring, Proc. Belgian FNRS, International Chair of Computer Science, Louvain-la-Neuve, March 18-23, 1990.

(with Rajeev Joshi) On the Impossibility of Robust Solutions for Fair Resource Allocation

Phase Synchronization

A Simple Proof of a Simple Consensus Algorithm

(with K. Mani Chandy) Proofs of Distributed Algorithms: An Exercise

(with David Gries) A Constructive Proof of Vizing's Theorem

UNITY: Notes on UNITY; New UNITY

Seuss: Overview of Seuss; A Discipline of Multiprogramming (a research manuscript)

Research Group

My research group, the PSP group, has a home page with more information about my work and electronic access to other papers.

Courses that I Teach

Here are links to some of the recent courses I have taught:

Distributed Computing (CS 380D), taught in Spring 1998.

Distributed Computing (CS 380D), taught in Spring 1999.

Advanced Distributed Computing (CS 390D), to be taught in Fall 1999.

 
 
 
 
 

19)

http://www.cs.utexas.edu/users/mooney/

Information extracted from URL : http://www.cs.utexas.edu/users/mooney/ START

Information extracted from URL : http://www.cs.utexas.edu/users/ml/nl.html START

Information extracted from URL : http://www.cs.utexas.edu/users/ml/recommender.html START

Find the information from the current URL:

Using HTML Structure and Linked Pages to Improve Learning for Text Categorization

Michael B. Cline

Undergraduate Honors Thesis, Department of Computer Sciences, University of Texas at Austin, May 1999.

Classifying web pages is an important task in automating the organization of information on the WWW, and learning for text categorization can help automate the development of such systems. This project explores using two aspects of HTML to improve learning for text categorization: 1) Using HTML tags such as titles, links, and headings to partition the text on a page and 2) Using the pages linked from a given page to augment its description. Initial experimental results on 26 categories from the Yahoo hierarchy demonstrate the promise of these two methods for improving the accuracy of a bag-of-words text classifier using a simple Bayesian learning algorithm.
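
The abstract above mentions a bag-of-words text classifier trained with a simple Bayesian learning algorithm. As a minimal sketch of that general technique (multinomial naive Bayes with add-one smoothing), the following might help; the class name, the toy training data, and the whitespace tokenization are illustrative assumptions, not code from the thesis.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial naive Bayes over bag-of-words features (illustrative)."""

    def fit(self, docs, labels):
        self.class_counts = Counter(labels)        # class -> number of docs
        self.word_counts = defaultdict(Counter)    # class -> word -> count
        self.vocab = set()
        for doc, y in zip(docs, labels):
            for w in doc.lower().split():
                self.word_counts[y][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, doc):
        total_docs = sum(self.class_counts.values())
        v = len(self.vocab)
        best, best_lp = None, float("-inf")
        for y, n_docs in self.class_counts.items():
            lp = math.log(n_docs / total_docs)     # log class prior
            denom = sum(self.word_counts[y].values()) + v
            for w in doc.lower().split():
                # add-one smoothing keeps unseen words from zeroing the score
                lp += math.log((self.word_counts[y][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best

clf = NaiveBayesText().fit(
    ["cheap pills online now", "meeting agenda attached"],
    ["spam", "ham"])
print(clf.predict("cheap pills for the meeting"))  # -> 'spam'
```

Partitioning text by HTML tags, as the thesis proposes, would roughly amount to keeping separate word counts per tag region; the sketch above treats each document as a single bag.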

Content-Based Book Recommending Using Learning for Text Categorization

Raymond J. Mooney and Loriene Roy

Submitted to the SIGIR-99 Workshop on Recommender Systems: Algorithms and Evaluation, May 1999

Recommender systems improve access to relevant products and information by making personalized suggestions based on previous examples of a user's likes and dislikes. Most existing recommender systems use social filtering methods that base recommendations on other users' preferences. By contrast, content-based methods use information about an item itself to make suggestions. This approach has the advantage of being able to recommend previously unrated items to users with unique interests and to provide explanations for its recommendations. We describe a content-based book recommending system that utilizes information extraction and a machine-learning algorithm for text categorization. Initial experimental results demonstrate that this approach can produce accurate recommendations. These experiments are based on ratings from random samplings of items and we discuss problems with previous experiments that employ skewed samples of user-selected examples to evaluate performance.

Book Recommending Using Text Categorization with Extracted Information

Raymond J. Mooney, Paul N. Bennett and Loriene Roy

Appears in the Papers of the AAAI-98/ICML-98 Workshop on Learning for Text Categorization and the Papers of the AAAI-98 Workshop on Recommender Systems, Madison, WI, July 1998.

Content-based recommender systems suggest documents, items, and services to users based on learning a profile of the user from rated examples containing information about the given items. Text categorization methods are very useful for this task but generally rely on unstructured text. We have developed a book-recommending system that utilizes semi-structured information about items gathered from the web using simple information extraction techniques. Initial experimental results demonstrate that this approach can produce fairly accurate recommendations.

Text Categorization Through Probabilistic Learning: Applications to Recommender Systems

Paul N. Bennett

Undergraduate Honors Thesis, Department of Computer Sciences, University of Texas at Austin, May 1998.

Also appears as AI Laboratory Technical Report AI98-270

With the growth of the World Wide Web, recommender systems have received an increasing amount of attention. Many recommender systems in use today are based on collaborative filtering. This project has focused on LIBRA, a content-based book recommending system. By utilizing text categorization methods and the information available for each book, the system determines a user profile which is used as the basis of recommendations made to the user. Instead of the bag-of-words approach used in many other statistical text categorization approaches, LIBRA parses each text sample into a semi-structured representation. We have used standard Machine Learning techniques to analyze the performance of several algorithms on this learning task. In addition, we analyze the utility of several methods of feature construction and selection (i.e. methods of choosing the representation of an item that the learning algorithm actually uses). After analyzing the system we conclude that good recommendations are produced after a relatively small number of training examples. We also conclude that the feature selection method tested does not improve the performance of these algorithms in any systematic way, though the results indicate other feature selection methods may prove useful. Feature construction, however, while not providing a large increase in performance with the particular construction methods used here, holds promise of providing performance improvements for the algorithms investigated. This text assumes only minor familiarity with concepts of artificial intelligence and should be readable by the upper division computer science undergraduate familiar with basic concepts of probability theory and set theory.

Information extracted from URL : http://www.cs.utexas.edu/users/ml/ilp.html START

Find the information from the current URL:

Relational Learning of Pattern-Match Rules for Information Extraction

Mary Elaine Califf and Raymond J. Mooney

To appear in Proceedings of the Sixteenth National Conference on Artificial Intelligence, Orlando, FL, July 1999. (AAAI-99)

Information extraction is a form of shallow text processing that locates a specified set of relevant items in a natural-language document. Systems for this task require significant domain-specific knowledge and are time-consuming and difficult to build by hand, making them a good application for machine learning. This paper presents a system, Rapier, that takes pairs of sample documents and filled templates and induces pattern-match rules that directly extract fillers for the slots in the template. Rapier employs a bottom-up learning algorithm which incorporates techniques from several inductive logic programming systems and acquires unbounded patterns that include constraints on the words, part-of-speech tags, and semantic classes present in the filler and the surrounding text. We present encouraging experimental results on two domains.
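
Rapier, as described above, learns pattern-match rules whose constraints cover words, part-of-speech tags, and semantic classes. Purely as a loose illustration of the slot-filling target (hand-written regular-expression patterns, hypothetical slot names, and a made-up job posting; not Rapier's rule language or its learning algorithm):

```python
import re

# Hand-written patterns standing in for learned extraction rules; the slot
# names and regexes are hypothetical, chosen only to show template filling.
PATTERNS = {
    "title":    re.compile(r"position:\s*(?P<v>[a-z ]+?)[.,\n]", re.I),
    "salary":   re.compile(r"salary:\s*(?P<v>\$[\d,]+)", re.I),
    "location": re.compile(r"location:\s*(?P<v>[a-z ,]+?)[.\n]", re.I),
}

def fill_template(document):
    """Return a slot -> filler dict; slots with no match map to None."""
    filled = {}
    for slot, pattern in PATTERNS.items():
        match = pattern.search(document)
        filled[slot] = match.group("v").strip() if match else None
    return filled

posting = "Position: software engineer. Salary: $60,000. Location: Austin, TX."
print(fill_template(posting))
# {'title': 'software engineer', 'salary': '$60,000', 'location': 'Austin, TX'}
```

The point of Rapier is precisely that rules of this flavor are induced bottom-up from document/template pairs rather than written by hand.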

Relational Learning Techniques for Natural Language Information Extraction

Mary Elaine Califf

Ph.D. Thesis, Department of Computer Sciences, University of Texas at Austin, August 1998.

132 pages.

Also appears as Artificial Intelligence Laboratory Technical Report AI98-276.

The recent growth of online information available in the form of natural language documents creates a greater need for computing systems with the ability to process those documents to simplify access to the information. One type of processing appropriate for many tasks is information extraction, a type of text skimming that retrieves specific types of information from text. Although information extraction systems have existed for two decades, these systems have generally been built by hand and contain domain specific information, making them difficult to port to other domains. A few researchers have begun to apply machine learning to information extraction tasks, but most of this work has involved applying learning to pieces of a much larger system. This dissertation presents a novel rule representation specific to natural language and a relational learning system, Rapier, which learns information extraction rules. Rapier takes pairs of documents and filled templates indicating the information to be extracted and learns pattern-matching rules to extract fillers for the slots in the template. The system is tested on several domains, showing its ability to learn rules for different tasks. Rapier's performance is compared to a propositional learning system for information extraction, demonstrating the superiority of relational learning for some information extraction tasks. Because one difficulty in using machine learning to develop natural language processing systems is the necessity of providing annotated examples to supervised learning systems, this dissertation also describes an attempt to reduce the number of examples Rapier requires by employing a form of active learning. Experimental results show that the number of examples required to achieve a given level of performance can be significantly reduced by this method.

An Experimental Comparison of Genetic Programming and Inductive Logic Programming on Learning Recursive List Functions

Lappoon R. Tang, Mary Elaine Califf, Raymond J. Mooney

TR AI98-271, Artificial Intelligence Lab, University of Texas at Austin, May 1998.

This paper experimentally compares three approaches to program induction: inductive logic programming (ILP), genetic programming (GP), and genetic logic programming (GLP) (a variant of GP for inducing Prolog programs). Each of these methods was used to induce four simple, recursive, list-manipulation functions. The results indicate that ILP is the most likely to induce a correct program from small sets of random examples, while GP is generally less accurate. GLP performs the worst, and is rarely able to induce a correct program. Interpretations of these results in terms of differences in search methods and inductive biases are presented.

Advantages of Decision Lists and Implicit Negatives in Inductive Logic Programming

Mary Elaine Califf and Raymond J. Mooney

New Generation Computing

, 16, 3, p. 263-281 (1998).

This paper demonstrates the capabilities of FOIDL, an inductive logic programming (ILP) system whose distinguishing characteristics are the ability to produce first-order decision lists, the use of an output completeness assumption as a substitute for negative examples, and the use of intensional background knowledge. The development of FOIDL was originally motivated by the problem of learning to generate the past tense of English verbs; however, this paper demonstrates its superior performance on two different sets of benchmark ILP problems. Tests on the finite element mesh design problem show that FOIDL's decision lists enable it to produce generally more accurate results than a range of methods previously applied to this problem. Tests with a selection of list-processing problems from Bratko's introductory Prolog text demonstrate that the combination of implicit negatives and intensionality allow FOIDL to learn correct programs from far fewer examples than FOIL.
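
The distinguishing FOIDL feature named here, a first-order decision list, is an ordered sequence of rules in which the first rule that applies decides the answer, letting exceptions sit in front of a general default. A tiny sketch of that evaluation order on the past-tense task the abstract mentions (the three rules are invented for illustration, not FOIDL's induced program):

```python
# Ordered rules: the first whose condition holds decides the answer.
# These toy rules are invented examples; FOIDL induces such lists
# (as Prolog clauses) from training pairs.
IRREGULAR = {"go": "went", "eat": "ate"}

RULES = [
    (lambda w: w in IRREGULAR,  lambda w: IRREGULAR[w]),  # exceptions first
    (lambda w: w.endswith("e"), lambda w: w + "d"),
    (lambda w: True,            lambda w: w + "ed"),      # general default
]

def past_tense(verb):
    for applies, transform in RULES:
        if applies(verb):          # first matching rule fires, rest ignored
            return transform(verb)

for v in ["go", "bake", "walk"]:
    print(v, "->", past_tense(v))  # went, baked, walked
```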

Using Multi-Strategy Learning to Improve Planning Efficiency and Quality

Tara A. Estlin

Ph.D. Thesis, Department of Computer Sciences, University of Texas at Austin, May 1998.

117 pages

Also appears as AI Laboratory Technical Report AI98-269

Artificial intelligence planning systems have become an important tool for automating a wide variety of tasks. However, even the most current planning algorithms suffer from two major problems. First, they often require infeasible amounts of computation time to solve problems in most domains. And second, they are not guaranteed to return the best solution to a planning problem, and in fact can sometimes return very low-quality solutions. One way to address these problems is to provide a planning system with domain-specific control knowledge, which helps guide the planner towards more promising search paths. Machine learning techniques enable a planning system to automatically acquire search-control knowledge for different applications. A considerable amount of planning and learning research has been devoted to acquiring rules that improve planning efficiency, also known as speedup learning. Much less work has been done in learning knowledge to improve the quality of plans, even though this is an essential feature for many real-world planning systems. Furthermore, even less research has been done in acquiring control knowledge to improve both these metrics.

The learning system presented in this dissertation, called SCOPE, is a unique approach to learning control knowledge for planning. SCOPE learns domain-specific control rules for a planner that improve both planning efficiency and plan quality, and it is one of the few systems that can learn control knowledge for partial-order planning. SCOPE's architecture integrates explanation-based learning (EBL) with techniques from inductive logic programming. Specifically, EBL is used to constrain an inductive search for control heuristics that help a planner choose between competing plan refinements. Since SCOPE uses a very flexible training approach, its learning algorithm can be easily focused to prefer search paths that are better for particular evaluation metrics. SCOPE is extensively tested on several planning domains, including a logistics transportation domain and a production manufacturing domain. In these tests, it is shown to significantly improve both planning efficiency and quality and is shown to be more robust than a competing approach.

Learning to Parse Natural Language Database Queries into Logical Form

Cynthia A. Thompson, Raymond J. Mooney, and Lappoon R. Tang

Proceedings of the ML-97 Workshop on Automata Induction, Grammatical Inference, and Language Acquisition.

For most natural language processing tasks, a parser that maps sentences into a semantic representation is significantly more useful than a grammar or automaton that simply recognizes syntactically well-formed strings. This paper reviews our work on using inductive logic programming methods to learn deterministic shift-reduce parsers that translate natural language into a semantic representation. We focus on the task of mapping database queries directly into executable logical form. An overview of the system is presented followed by recent experimental results on corpora of Spanish geography queries and English job-search queries.
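
The learned parsers reviewed here control a deterministic shift-reduce parser. For readers unfamiliar with that machinery, the sketch below shows a bare shift-reduce loop over a toy grammar; the grammar, the tagged input, and the hard-coded "reduce whenever a rule matches" policy are invented stand-ins for the control decisions the system actually learns.

```python
# Bare deterministic shift-reduce parser over a toy grammar. The learning
# systems discussed above induce the shift/reduce control choices; here the
# policy is simply "reduce whenever a rule matches, otherwise shift".
GRAMMAR = [                      # reduce rules: RHS symbols -> LHS symbol
    (("Det", "N"), "NP"),
    (("V", "NP"), "VP"),
    (("NP", "VP"), "S"),
]

def parse(tagged_words):
    stack, buffer = [], list(tagged_words)
    while buffer or len(stack) > 1:
        for rhs, lhs in GRAMMAR:
            if tuple(sym for sym, _ in stack[-len(rhs):]) == rhs:
                children = stack[-len(rhs):]
                del stack[-len(rhs):]
                stack.append((lhs, children))        # reduce
                break
        else:
            if not buffer:
                raise ValueError("no rule applies and buffer is empty")
            stack.append(buffer.pop(0))              # shift
    return stack[0]

# (symbol, word) pairs for "the dog bit a man"
tokens = [("Det", "the"), ("N", "dog"), ("V", "bit"), ("Det", "a"), ("N", "man")]
print(parse(tokens))             # nested ('S', [NP..., VP...]) parse tree
```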

Learning to Improve both Efficiency and Quality of Planning

Tara A. Estlin and Raymond J. Mooney

Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI-97), Nagoya, Japan, pp. 1227-1232, August 1997.

Most research in learning for planning has concentrated on efficiency gains. Another important goal is improving the quality of final plans. Learning to improve plan quality has been examined by a few researchers, however, little research has been done on learning to improve both efficiency and quality. This paper explores this problem by using the SCOPE learning system to acquire control knowledge that improves on both of these metrics. Since SCOPE uses a very flexible training approach, we can easily focus its learning algorithm to prefer search paths that are better for particular evaluation metrics. Experimental results show that SCOPE can significantly improve both the quality of final plans and overall planning efficiency.

An Inductive Logic Programming Method for Corpus-based Parser Construction

John M. Zelle and Raymond J. Mooney

Unpublished technical note, January 1997

Empirical methods for building natural language systems have become an important area of research in recent years. Most current approaches are based on propositional learning algorithms and have been applied to the problem of acquiring broad-coverage parsers for relatively shallow (syntactic) representations. This paper outlines an alternative empirical approach based on techniques from a subfield of machine learning known as Inductive Logic Programming (ILP). ILP algorithms, which learn relational (first-order) rules, are used in a parser acquisition system called CHILL that learns rules to control the behavior of a traditional shift-reduce parser. Using this approach, CHILL is able to learn parsers for a variety of different types of analyses, from traditional syntax trees to more meaning-oriented case-role and database query forms. Experimental evidence shows that CHILL performs comparably to propositional learning systems on similar tasks, and is able to go beyond the broad-but-shallow paradigm and learn mappings directly from sentences into useful semantic representations. In a complete database-query application, parsers learned by CHILL outperform an existing hand-crafted system, demonstrating the promise of empirical techniques for automating the construction of certain NLP systems.

Relational Learning Techniques for Natural Language Information Extraction

Mary Elaine Califf

Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin, 1997.

The recent growth of online information available in the form of natural language documents creates a greater need for computing systems with the ability to process those documents to simplify access to the information. One type of processing appropriate for many tasks is information extraction, a type of text skimming that retrieves specific types of information from text. Although information extraction systems have existed for two decades, these systems have generally been built by hand and contain domain specific information, making them difficult to port to other domains. A few researchers have begun to apply machine learning to information extraction tasks, but most of this work has involved applying learning to pieces of a much larger system. This paper presents a novel rule representation specific to natural language and a learning system, RAPIER, which learns information extraction rules. RAPIER takes pairs of documents and filled templates indicating the information to be extracted and learns patterns to extract fillers for the slots in the template. This proposal presents initial results on a small corpus of computer-related job postings with a preliminary version of RAPIER. Future research will involve several enhancements to RAPIER as well as more thorough testing on several domains and extension to additional natural language processing tasks. We intend to extend the rule representation and algorithm to allow for more types of constraints than are currently supported. We also plan to incorporate active learning, or sample selection, methods, specifically query by committee, into RAPIER. These methods have the potential to substantially reduce the amount of annotation required. We will explore the issue of distinguishing relevant and irrelevant messages, since currently RAPIER extracts from any messages given to it, assuming that all are relevant. We also intend to run much larger tests with RAPIER on multiple domains including the terrorism domain from the third and fourth Message Understanding Conferences, which will allow comparison against other systems. Finally, we plan to demonstrate the generality of RAPIER's representation and algorithm by applying it to other natural language processing tasks such as word sense disambiguation.

Relational Learning of Pattern-Match Rules for Information Extraction

Mary Elaine Califf and Raymond J. Mooney

Proceedings of the ACL Workshop on Natural Language Learning, pp. 9-15, Madrid, Spain, July 1997.

Information extraction systems process natural language documents and locate a specific set of relevant items. Given the recent success of empirical or corpus-based approaches in other areas of natural language processing, machine learning has the potential to significantly aid the development of these knowledge-intensive systems. This paper presents a system, RAPIER, that takes pairs of documents and filled templates and induces pattern-match rules that directly extract fillers for the slots in the template. The learning algorithm incorporates techniques from several inductive logic programming systems and learns unbounded patterns that include constraints on the words and part-of-speech tags surrounding the filler. Encouraging results are presented on learning to extract information from computer job postings from the newsgroup misc.jobs.offered.

Applying ILP-based Techniques to Natural Language Information Extraction: An Experiment in Relational Learning

Mary Elaine Califf and Raymond J. Mooney

Workshop Notes of the IJCAI-97 Workshop on Frontiers of Inductive Logic Programming, pp. 7-11, Nagoya, Japan, August 1997.

Information extraction systems process natural language documents and locate a specific set of relevant items. Given the recent success of empirical or corpus-based approaches in other areas of natural language processing, machine learning has the potential to significantly aid the development of these knowledge-intensive systems. This paper presents a system, RAPIER, that takes pairs of documents and filled templates and induces pattern-match rules that directly extract fillers for the slots in the template. The learning algorithm incorporates techniques from several inductive logic programming systems and learns unbounded patterns that include constraints on the words and part-of-speech tags surrounding the filler. Encouraging results are presented on learning to extract information from computer job postings from the newsgroup misc.jobs.offered.

Inductive Logic Programming for Natural Language Processing

Raymond J. Mooney

Inductive Logic Programming: Selected Papers from the 6th International Workshop, S. Muggleton (Ed.), pp. 3-22, Springer Verlag, Berlin, 1997.

Proceedings of the 6th International Inductive Logic Programming Workshop, pp. 205-224, Stockholm, Sweden, August 1996.

This paper reviews our recent work on applying inductive logic programming to the construction of natural language processing systems. We have developed a system, CHILL, that learns a parser from a training corpus of parsed sentences by inducing heuristics that control an initial overly-general shift-reduce parser. CHILL learns syntactic parsers as well as ones that translate English database queries directly into executable logical form. The ATIS corpus of airline information queries was used to test the acquisition of syntactic parsers, and CHILL performed competitively with recent statistical methods. English queries to a small database on U.S. geography were used to test the acquisition of a complete natural language interface, and the parser that CHILL acquired was more accurate than an existing hand-coded system. The paper also includes a discussion of several issues this work has raised regarding the capabilities and testing of ILP systems as well as a summary of our current research directions.

Integrating Explanation-Based and Inductive Learning Techniques to Acquire Search-Control for Planning

Tara A. Estlin

Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin, 1996. (Technical Report AI96-250)

Planning systems have become an important tool for automating a wide variety of tasks. Control knowledge guides a planner to find solutions quickly and is crucial for efficient planning in most domains. Machine learning techniques enable a planning system to automatically acquire domain-specific search-control knowledge for different applications. Past approaches to learning control information have usually employed explanation-based learning (EBL) to generate control rules. Unfortunately, EBL alone often produces overly complex rules that actually decrease rather than improve overall planning efficiency. This paper presents a novel learning approach for control knowledge acquisition that integrates explanation-based learning with techniques from inductive logic programming. In our learning system SCOPE, EBL is used to constrain an inductive search for control heuristics that help a planner choose between competing plan refinements. SCOPE is one of the few systems to address learning control information for newer, partial-order planners. Specifically, this proposal describes how SCOPE learns domain-specific control rules for the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains, and to be more effective than a pure EBL approach.

Future research will be performed in three main areas. First, SCOPE's learning algorithm will be extended to include additional techniques such as constructive induction and rule utility analysis. Second, SCOPE will be more thoroughly tested; several real-world planning domains have been identified as possible testbeds, and more in-depth comparisons will be drawn between SCOPE and other competing approaches. Third, SCOPE will be implemented in a different planning system in order to test its portability to other planning algorithms. This work should demonstrate that machine-learning techniques can be a powerful tool in the quest for tractable real-world planning.

Integrating EBL and ILP to Acquire Control Rules for Planning

Tara A. Estlin and Raymond J. Mooney

Proceedings of the Third International Workshop on Multi-Strategy Learning, pp. 271-279, Harpers Ferry, WV, May 1996. (MSL-96)

Most approaches to learning control information in planning systems use explanation-based learning to generate control rules. Unfortunately, EBL alone often produces overly complex rules that actually decrease planning efficiency. This paper presents a novel learning approach for control knowledge acquisition that integrates explanation-based learning with techniques from inductive logic programming. EBL is used to constrain an inductive search for selection heuristics that help a planner choose between competing plan refinements. SCOPE is one of the few systems to address learning control information in the newer partial-order planners. Specifically, SCOPE learns domain-specific control rules for a version of the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains.

Learning to Parse Database Queries using Inductive Logic Programming

John M. Zelle and Raymond J. Mooney

Proceedings of the Thirteenth National Conference on Artificial Intelligence, pp. 1050-1055, Portland, OR, August 1996. (AAAI-96)

This paper presents recent work using the CHILL parser acquisition system to automate the construction of a natural-language interface for database queries. CHILL treats parser acquisition as the learning of search-control rules within a logic program representing a shift-reduce parser and uses techniques from Inductive Logic Programming to learn relational control knowledge. Starting with a general framework for constructing a suitable logical form, CHILL is able to train on a corpus comprising sentences paired with database queries and induce parsers that map subsequent sentences directly into executable queries. Experimental results with a complete database-query application for U.S. geography show that CHILL is able to learn parsers that outperform a pre-existing, hand-crafted counterpart. These results demonstrate the ability of a corpus-based system to produce more than purely syntactic representations. They also provide direct evidence of the utility of an empirical approach at the level of a complete natural language application.

Multi-Strategy Learning of Search Control for Partial-Order Planning

Tara A. Estlin and Raymond J. Mooney

Proceedings of the Thirteenth National Conference on Artificial Intelligence, pp. 843-848, Portland, OR, August 1996. (AAAI-96)

Most research in planning and learning has involved linear, state-based planners. This paper presents SCOPE, a system for learning search-control rules that improve the performance of a partial-order planner. SCOPE integrates explanation-based and inductive learning techniques to acquire control rules for a partial-order planner. Learned rules are in the form of selection heuristics that help the planner choose between competing plan refinements. Specifically, SCOPE learns domain-specific control rules for a version of the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains.

Advantages of Decision Lists and Implicit Negatives in Inductive Logic Programming

Mary Elaine Califf and Raymond J. Mooney

Technical Report, Artificial Intelligence Lab, University of Texas at Austin, 1996.

This paper demonstrates the capabilities of FOIDL, an inductive logic programming (ILP) system whose distinguishing characteristics are the ability to produce first-order decision lists, the use of an output completeness assumption to provide implicit negative examples, and the use of intensional background knowledge. The development of FOIDL was originally motivated by the problem of learning to generate the past tense of English verbs; however, this paper demonstrates its superior performance on two different sets of benchmark ILP problems. Tests on the finite element mesh design problem show that FOIDL's decision lists enable it to produce better results than all other ILP systems whose results on this problem have been reported. Tests with a selection of list-processing problems from Bratko's introductory Prolog text demonstrate that the combination of implicit negatives and intensionality allows FOIDL to learn correct programs from far fewer examples than FOIL.
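
The decision-list representation is easy to picture: an ordered set of rules in which the first matching rule fires, mimicking Prolog clauses that end in a cut. The following toy sketch (invented rules, not FOIDL output) shows the exception-before-default structure on the past-tense task:

    # Toy first-order decision list for English past tense; the first
    # matching rule answers, as a clause ending in a cut would.

    past_tense_rules = [
        (lambda v: v == "go",       lambda v: "went"),    # specific exception
        (lambda v: v.endswith("e"), lambda v: v + "d"),   # bake -> baked
        (lambda v: True,            lambda v: v + "ed"),  # general default
    ]

    def past(verb):
        for condition, action in past_tense_rules:
            if condition(verb):
                return action(verb)

    print(past("go"), past("bake"), past("walk"))  # went baked walked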

Comparative Results on Using Inductive Logic Programming for Corpus-based Parser Construction

John M. Zelle and Raymond J. Mooney

Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Processing

, S. Wermter, E. Riloff and G. Scheler, Eds, Springer Verlag, 1995.

This paper presents results from recent experiments with CHILL, a corpus-based parser acquisition system. CHILL treats language acquisition as the learning of search-control rules within a logic program. Unlike many current corpus-based approaches that use statistical learning algorithms, CHILL uses techniques from inductive logic programming (ILP) to learn relational representations. CHILL is a very flexible system and has been used to learn parsers that produce syntactic parse trees, case-role analyses, and executable database queries. The reported experiments compare CHILL's performance to that of a more naive application of ILP to parser acquisition. The results show that ILP techniques, as employed in CHILL, are a viable alternative to statistical methods and that the control-rule framework is fundamental to CHILL's success.

Learning the Past Tense of English Verbs Using Inductive Logic Programming

Raymond J. Mooney and Mary Elaine Califf

Symbolic, Connectionist, and Statistical Approaches to Learning for Natural Language Processing

, S. Wermter, E. Riloff and G. Scheler, Eds, Springer Verlag, 1995.

This paper presents results on using a new inductive logic programming method called FOIDL to learn the past tense of English verbs. The past tense task has been widely studied in the context of the symbolic/connectionist debate. Previous papers have presented results using various neural-network and decision-tree learning methods. We have developed a technique for learning a special type of Prolog program called a first-order decision list, defined as an ordered list of clauses each ending in a cut. FOIDL is based on FOIL (Quinlan, 1990) but employs intensional background knowledge and avoids the need for explicit negative examples. It is particularly useful for problems that involve rules with specific exceptions, such as the past-tense task. We present results showing that FOIDL learns a more accurate past-tense generator from significantly fewer examples than all other previous methods.

Using Inductive Logic Programming to Automate the Construction of Natural Language Parsers

John M. Zelle

Ph.D. Thesis, Department of Computer Sciences, University of Texas at Austin, August, 1995.

122 pages

Also appears as AI Lab Technical Report AI96-249

Designing computer systems to understand natural language input is a difficult task. In recent years there has been considerable interest in corpus-based methods for constructing natural language parsers. These empirical approaches replace hand-crafted grammars with linguistic models acquired through automated training over language corpora. A common thread among such methods to date is the use of propositional or probabilistic representations for the learned knowledge. This dissertation presents an alternative approach based on techniques from a subfield of machine learning known as inductive logic programming (ILP). ILP, which investigates the learning of relational (first-order) rules, provides an empirical method for acquiring knowledge within traditional, symbolic parsing frameworks.

This dissertation details the architecture, implementation and evaluation of CHILL, a computer system for acquiring natural language parsers by training over corpora of parsed text. CHILL treats language acquisition as the learning of search-control rules within a logic program that implements a shift-reduce parser. Control rules are induced using a novel ILP algorithm which handles difficult issues arising in the induction of search-control heuristics. Both the control-rule framework and the induction algorithm are crucial to CHILL's success.

The main advantage of CHILL over propositional counterparts is its flexibility in handling varied representations. CHILL has produced parsers for various analyses including case-role mapping, detailed syntactic parse trees, and a logical form suitable for expressing first-order database queries. All of these tasks are accomplished within the same framework, using a single, general learning method that can acquire new syntactic and semantic categories for resolving ambiguities.

Experimental evidence from both artificial and real-world corpora demonstrates that CHILL learns parsers as well as or better than previous artificial neural network or probabilistic approaches on comparable tasks. In the database query domain, which goes beyond the scope of previous empirical approaches, the learned parser outperforms an existing hand-crafted system. These results support the claim that ILP techniques as implemented in CHILL represent a viable alternative with significant potential advantages over neural-network, propositional, and probabilistic approaches to empirical parser construction.

A Comparison of Two Methods Employing Inductive Logic Programming for Corpus-based Parser Construction

John M. Zelle and Raymond J. Mooney

Working Notes of the IJCAI-95 Workshop on New Approaches to Learning for Natural Language Processing

, pp. 79-86, Montreal, Quebec, August, 1995.

This paper presents results from recent experiments with CHILL, a corpus-based parser acquisition system. CHILL treats grammar acquisition as the learning of search-control rules within a logic program. Unlike many current corpus-based approaches that use propositional or probabilistic learning algorithms, CHILL uses techniques from inductive logic programming (ILP) to learn relational representations. The reported experiments compare CHILL's performance to that of a more naive application of ILP to parser acquisition. The results show that ILP techniques, as employed in CHILL, are a viable alternative to propositional methods and that the control-rule framework is fundamental to CHILL's success.

Inducing Logic Programs without Explicit Negative Examples

John M. Zelle, Cynthia A. Thompson, Mary Elaine Califf, and Raymond J. Mooney

Proceedings of the Fifth International Workshop on Inductive Logic Programming

, Leuven, Belgium, September 1995.

This paper presents a method for learning logic programs without explicit negative examples by exploiting an assumption of output completeness. A mode declaration is supplied for the target predicate and each training input is assumed to be accompanied by all of its legal outputs. Any other outputs generated by an incomplete program implicitly represent negative examples; however, large numbers of ground negative examples never need to be generated. This method has been incorporated into two ILP systems, CHILLIN and IFOIL, both of which use intensional background knowledge. Tests on two natural language acquisition tasks, case-role mapping and past-tense learning, illustrate the advantages of the approach.
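
A small sketch may help clarify the output-completeness assumption (the encoding below is invented for illustration): because each training input carries all of its legal outputs, any additional output an overly general program produces can be treated as a negative example without ever enumerating ground negatives up front:

    # Implicit negatives from output completeness (illustrative encoding).

    training = {"sleep": {"slept"}, "walk": {"walked"}}  # input -> all legal outputs

    def implicit_negatives(program, training):
        negatives = []
        for x, legal in training.items():
            for y in program(x):
                if y not in legal:
                    negatives.append((x, y))   # covered but not sanctioned
        return negatives

    overly_general = lambda v: {v + "ed", v + "t"}       # candidate program
    print(implicit_negatives(overly_general, training))
    # flags ('sleep', 'sleeped'), ('sleep', 'sleept') and ('walk', 'walkt')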

Induction of First-Order Decision Lists: Results on Learning the Past Tense of English Verbs

Raymond J. Mooney and Mary Elaine Califf

Journal of Artificial Intelligence Research

, 3 (1995), pp. 1-24.

This paper presents a method for inducing logic programs from examples that learns a new class of concepts called first-order decision lists, defined as ordered lists of clauses each ending in a cut. The method, called FOIDL, is based on FOIL but employs intensional background knowledge and avoids the need for explicit negative examples. It is particularly useful for problems that involve rules with specific exceptions, such as learning the past-tense of English verbs, a task widely studied in the context of the symbolic/connectionist debate. FOIDL is able to learn concise, accurate programs for this problem from significantly fewer examples than previous methods (both connectionist and symbolic).

Combining Top-Down And Bottom-Up Techniques In Inductive Logic Programming

John M. Zelle, Raymond J. Mooney and Joshua B. Konvisser

Proceedings of the Eleventh International Workshop on Machine Learning

, pp. 343-351, Rutgers, NJ, July 1994. (ML-94)

This paper describes a new method for inducing logic programs from examples which attempts to integrate the best aspects of existing ILP methods into a single coherent framework. In particular, it combines a bottom-up method similar to GOLEM with a top-down method similar to FOIL. It also includes a method for predicate invention similar to CHAMP and an elegant solution to the ``noisy oracle'' problem which allows the system to learn recursive programs without requiring a complete set of positive examples. Systematic experimental comparisons to both GOLEM and FOIL on a range of problems are used to clearly demonstrate the advantages of the approach.
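
As a taste of the bottom-up component (a GOLEM-style least general generalization, sketched here with invented names rather than the paper's code), two ground atoms generalize by replacing each differing argument pair with a shared variable:

    # Least general generalization of two atoms (toy version).

    def lgg_term(t1, t2, bindings):
        if t1 == t2:
            return t1
        bindings.setdefault((t1, t2), "X%d" % len(bindings))
        return bindings[(t1, t2)]

    def lgg_atom(a1, a2):
        assert a1[0] == a2[0] and len(a1) == len(a2)   # same predicate/arity
        bindings = {}
        return (a1[0],) + tuple(
            lgg_term(x, y, bindings) for x, y in zip(a1[1:], a2[1:]))

    # parent(tom, bob) and parent(ann, sue) generalize to parent(X0, X1)
    print(lgg_atom(("parent", "tom", "bob"), ("parent", "ann", "sue")))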

Inducing Deterministic Prolog Parsers From Treebanks: A Machine Learning Approach

John M. Zelle and Raymond J. Mooney

Proceedings of the Twelfth National Conference on AI

, pp. 748-753, Seattle, WA, July 1994. (AAAI-94)

This paper presents a method for constructing deterministic, context-sensitive, Prolog parsers from corpora of parsed sentences. Our approach uses recent machine learning methods for inducing Prolog rules from examples (inductive logic programming). We discuss several advantages of this method compared to recent statistical methods and present results on learning complete parsers from portions of the ATIS corpus.

Integrating ILP and EBL

Raymond J. Mooney and John M. Zelle

SIGART Bulletin

, Volume 5, number 1, Jan. 1994, pp. 12-21.

This paper presents a review of recent work that integrates methods from Inductive Logic Programming (ILP) and Explanation-Based Learning (EBL). ILP and EBL methods have complementary strengths and weaknesses and a number of recent projects have effectively combined them into systems with better performance than either of the individual approaches. In particular, integrated systems have been developed for guiding induction with prior knowledge (ML-SMART, FOCL, GRENDEL), refining imperfect domain theories (FORTE, AUDREY, Rx), and learning effective search-control knowledge (AxA-EBL, DOLPHIN).

Combining FOIL and EBG to Speed-Up Logic Programs

John M. Zelle and Raymond J. Mooney

Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence

, pp. 1106-1111, Chambery, France, 1993. (IJCAI-93)

This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm is shown to be an improvement over competing EBL approaches in several domains. Additionally, the algorithm is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.
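
A minimal way to picture these clause-selection rules (a hedged sketch with invented structures, not the paper's system): each clause of a nondeterministic procedure gets a learned guard, and clauses whose guards predict failure are pruned before being tried:

    # Learned guards pruning clause choices (illustrative only).

    def solve(goal, clauses, guards):
        for clause in clauses:
            if guards.get(clause["name"], lambda g: True)(goal):
                result = clause["body"](goal)
                if result is not None:
                    return result       # commit to the guarded clause
        return None

    clauses = [
        {"name": "c1", "body": lambda g: g * 2 if g < 10 else None},
        {"name": "c2", "body": lambda g: g - 1},
    ]
    guards = {"c1": lambda g: g < 10}   # learned: c1 only pays off when g < 10
    print(solve(3, clauses, guards), solve(42, clauses, guards))  # 6 41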

Learning Semantic Grammars With Constructive Inductive Logic Programming

John M. Zelle and Raymond J. Mooney

Proceedings of the Eleventh National Conference of the American Association for Artificial Intelligence

, pp. 817-822, Washington, D.C., July 1993. (AAAI-93)

Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and outperform previous approaches based on connectionist techniques.

Speeding-up Logic Programs by Combining EBG and FOIL

John M. Zelle and Raymond J. Mooney

Proceedings of the 1992 Machine Learning Workshop on Knowledge Compilation and Speedup Learning

, Aberdeen, Scotland, July 1992.

This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm produces not only EBL-like speed up of problem solvers, but is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.

Refinement of First-Order Horn-Clause Domain Theories

Bradley L. Richards and Raymond J. Mooney

Machine Learning

19, 2 (1995), pp. 95-131.

Knowledge acquisition is a difficult and time-consuming task, and as error-prone as any human activity. The task of automatically improving an existing knowledge base using learning methods is addressed by a new class of systems performing theory refinement. Until recently, such systems were limited to propositional theories. This paper presents a system, FORTE (First-Order Revision of Theories from Examples), for refining first-order Horn-clause theories. Moving to a first-order representation opens many new problem areas, such as logic program debugging and qualitative modelling, that are beyond the reach of propositional systems. FORTE uses a hill-climbing approach to revise theories. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. FORTE has been tested in several domains including logic programming and qualitative modelling.
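
The revision loop itself is simple to schematize (the code below is an invented toy, not FORTE): score the current theory, generate candidate revisions from an operator library, keep the best one, and stop when nothing improves:

    # Hill-climbing theory refinement loop (schematic).

    def refine(theory, examples, operators, score):
        current = score(theory, examples)
        while True:
            candidates = [op(theory, examples) for op in operators]
            best = max(candidates, key=lambda t: score(t, examples))
            if score(best, examples) <= current:
                return theory            # no revision improves accuracy
            theory, current = best, score(best, examples)

    # toy usage: the "theory" is a threshold, operators nudge it up or down
    examples = [(1, False), (2, False), (3, True), (4, True)]
    score = lambda t, ex: sum((x > t) == y for x, y in ex)
    operators = [lambda t, ex: t + 1, lambda t, ex: t - 1]
    print(refine(0, examples, operators, score))   # settles on threshold 2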

Learning Relations by Pathfinding

Bradley L. Richards and Raymond J. Mooney

Proceedings of the Tenth National Conference on Artificial Intelligence,

pp. 50-55, San Jose, CA, July 1992.

First-order learning systems (e.g. FOIL, FOCL, FORTE) generally rely on hill-climbing heuristics in order to avoid the combinatorial explosion inherent in learning first-order concepts. However, hill-climbing leaves these systems vulnerable to local maxima and local plateaus. We present a method called relational pathfinding, which has proven highly effective in escaping local maxima and crossing local plateaus. We present our algorithm and provide learning results in two domains: family relationships and qualitative model building.
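
The pathfinding idea reduces to a graph search over ground facts; the sketch below (toy facts, invented names) finds a chain of relations linking two constants, which then suggests the body of a clause:

    # Relational pathfinding as breadth-first search over facts (toy).

    from collections import deque

    facts = [("parent", "tom", "bob"), ("parent", "bob", "ann")]

    def find_path(start, goal):
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            node, path = frontier.popleft()
            if node == goal:
                return path
            for rel, a, b in facts:
                for nxt in ([b] if a == node else [a] if b == node else []):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, path + [(rel, a, b)]))
        return None

    # the tom-to-ann path through two parent facts suggests the clause
    # grandparent(X, Y) :- parent(X, Z), parent(Z, Y).
    print(find_path("tom", "ann"))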

Information extracted from URL : http://www.cs.utexas.edu/users/ml/theory-rev.html START

Find the information from the current URL:

Theory Refinement for Bayesian Networks with Hidden Variables

Sowmya Ramachandran and Raymond J. Mooney

Proceedings of the Fifteenth International Conference on Machine Learning

, Madison, WI, pp. 454-462, July 1998.

While there has been a growing interest in the problem of learning Bayesian networks from data, no technique exists for learning or revising Bayesian networks with hidden variables (i.e., variables not represented in the data) that has been shown to be efficient, effective, and scalable through evaluation on real data. The few techniques that exist for revising such networks perform a blind search through a large space of revisions, and are therefore computationally expensive. This paper presents BANNER, a technique for using data to revise a given Bayesian network with noisy-or and noisy-and nodes, to improve its classification accuracy. The initial network can be derived directly from a logical theory expressed as propositional rules. BANNER can revise networks with hidden variables, and add hidden variables when necessary. Unlike previous approaches, BANNER employs mechanisms similar to logical theory refinement techniques for using the data to focus the search for effective modifications. Experiments on real-world problems in the domain of molecular biology demonstrate that BANNER can effectively revise fairly large networks to significantly improve their accuracies.

Theory Refinement of Bayesian Networks with Hidden Variables

Sowmya Ramachandran

Ph.D. Thesis, Department of Computer Sciences, University of Texas at Austin, May, 1998.

129 pages

Also appears as Artificial Intelligence Laboratory Technical Report AI98-265.

Research in theory refinement has shown that biasing a learner with initial, approximately correct knowledge produces more accurate results than learning from data alone. While techniques have been developed to revise logical and connectionist representations, little has been done to revise probabilistic representations. Bayesian networks are well-established as a sound formalism for representing and reasoning with probabilistic knowledge, and are widely used. There has been a growing interest in the problem of learning Bayesian networks from data. However, there is no existing technique for learning or revising Bayesian networks with hidden variables (i.e., variables not represented in the data) that has been shown to be efficient, effective, and scalable through evaluation on real data. The few techniques that exist for revising such networks perform a blind search through a large space of revisions, and are therefore computationally expensive. This dissertation presents Banner, a technique for using data to revise a given Bayesian network with Noisy-Or and Noisy-And nodes, to improve its classification accuracy. Additionally, the initial network can be derived directly from a logical theory expressed as propositional Horn-clause rules. Banner can revise networks with hidden variables, and add hidden variables when necessary. Unlike previous approaches to this problem, Banner employs mechanisms similar to those used in logical theory refinement techniques for using the data to focus the search for effective modifications to the network. It can also be used to learn networks with hidden variables from data alone. We also introduce Banner-Pr, a technique for revising the parameters of a Bayesian network with Noisy-Or/And nodes, that directly exploits the computational efficiency afforded by these models. Experiments on several real-world learning problems in domains such as molecular biology and intelligent tutoring systems demonstrate that Banner can effectively and efficiently revise networks to significantly improve their accuracies, and thus learn highly accurate classifiers. Comparisons with the Naive Bayes algorithm show that using the theory refinement approach gives Banner a substantial edge over learning from data alone. We also show that Banner-Pr converges faster and produces more accurate classifiers than an existing algorithm for learning the parameters of a network.

Integrating Abduction and Induction in Machine Learning

Raymond J. Mooney

Working Notes of the IJCAI-97 Workshop on Abduction and Induction in AI

, Nagoya, Japan, pp. 37-42, August, 1997

This paper discusses the integration of traditional abductive and inductive reasoning methods in the development of machine learning systems. In particular, the paper discusses our recent work in two areas: 1) The use of traditional abductive methods to propose revisions during theory refinement, where an existing knowledge base is modified to make it consistent with a set of empirical data; and 2) The use of inductive learning methods to automatically acquire from examples a diagnostic knowledge base used for abductive reasoning.

Parameter Revision Techniques for Bayesian Networks with Hidden Variables: An Experimental Comparison

Sowmya Ramachandran and Raymond J. Mooney

Unpublished technical note, January 1997

Learning Bayesian networks inductively in the presence of hidden variables is still an open problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is not completely solved. In this paper, we present an approach that learns the parameters of a Bayesian network composed of noisy-or and noisy-and nodes by using a gradient descent back-propagation approach similar to that used to train neural networks. For the task of causal inference, it has the advantage of being able to learn in the presence of hidden variables. We compare the performance of this approach with the adaptive probabilistic networks technique on a real-world classification problem in molecular biology, and show that our approach trains faster and learns networks with higher classification accuracy.

Combining Symbolic and Connectionist Learning Methods to Refine Certainty-Factor Rule-Bases

J. Jeffrey Mahoney

Ph.D. Thesis, Department of Computer Sciences, University of Texas at Austin, May, 1996.

133 pages

Also appears as Artificial Intelligence Laboratory Technical Report AI96-260.

This research describes the system RAPTURE, which is designed to revise rule bases expressed in certainty-factor format. Recent studies have shown that learning is facilitated when biased with domain-specific expertise, and have also shown that many real-world domains require some form of probabilistic or uncertain reasoning in order to successfully represent target concepts. RAPTURE was designed to take advantage of both of these results.

Beginning with a set of certainty-factor rules, along with accurately-labelled training examples, RAPTURE makes use of both symbolic and connectionist learning techniques for revising the rules, in order that they correctly classify all of the training examples. A modified version of backpropagation is used to adjust the certainty factors of the rules, ID3's information-gain heuristic is used to add new rules, and the Upstart algorithm is used to create new hidden terms in the rule base.

Results on refining four real-world rule bases are presented that demonstrate the effectiveness of this combined approach. Two of these rule bases were designed to identify particular areas in strands of DNA, one is for identifying infectious diseases, and the fourth attempts to diagnose soybean diseases. The results of RAPTURE are compared with those of backpropagation, C4.5, KBANN, and other learning systems. RAPTURE generally produces sets of rules that are more accurate than these other systems, often creating smaller sets of rules and using less training time.
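
The key technical point, that certainty-factor combination is differentiable and so can be tuned by a backpropagation-style update, can be sketched in a few lines (an illustrative stand-in, not RAPTURE's implementation):

    # Mycin-style combination of positive certainty factors and one
    # gradient step on a single rule's CF (illustrative).

    def combine(cfs):
        out = 0.0
        for cf in cfs:
            out = out + cf - out * cf     # 1 - prod(1 - cf_i)
        return out

    def train_step(cfs, i, target, lr=0.5):
        others = combine(cfs[:i] + cfs[i + 1:])
        error = combine(cfs) - target
        cfs[i] -= lr * error * (1.0 - others)   # d combine / d cfs[i]
        return cfs

    cfs = [0.3, 0.4]
    print(combine(cfs))             # 0.58
    print(train_step(cfs, 0, 0.9))  # raises cfs[0] toward the 0.9 target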

A Novel Application of Theory Refinement to Student Modeling

Paul Baffes and Raymond J. Mooney

Best Paper Award

Proceedings of the Thirteenth National Conference on Aritificial Intelligence

, pp. 403-408, Portland, OR, August, 1996. (AAAI-96)

Theory refinement systems developed in machine learning automatically modify a knowledge base to render it consistent with a set of classified training examples. We illustrate a novel application of these techniques to the problem of constructing a student model for an intelligent tutoring system (ITS). Our approach is implemented in an ITS authoring system called ASSERT, which uses theory refinement to introduce errors into an initially correct knowledge base so that it models incorrect student behavior. The efficacy of the approach has been demonstrated by evaluating a tutor developed with ASSERT with 75 students tested on a classification task covering concepts from an introductory course on the C++ programming language. The system produced reasonably accurate models, and students who received feedback based on these models performed significantly better on a post test than students who received simple reteaching.

Refinement-Based Student Modeling and Automated Bug Library Construction

Paul Baffes and Raymond Mooney

Journal of Artificial Intelligence in Education

, 7, 1 (1996), pp. 75-116.

A critical component of model-based intelligent tutoring systems is a mechanism for capturing the conceptual state of the student, which enables the system to tailor its feedback to suit individual strengths and weaknesses. To be useful such a modeling technique must be practical, in the sense that models are easy to construct, and effective, in the sense that using the model actually impacts student learning. This research presents a new student modeling technique which can automatically capture novel student errors using only correct domain knowledge, and can automatically compile trends across multiple student models. This approach has been implemented as a computer program, ASSERT, using a machine learning technique called theory refinement, which is a method for automatically revising a knowledge base to be consistent with a set of examples. Using a knowledge base that correctly defines a domain and examples of a student's behavior in that domain, ASSERT models student errors by collecting any refinements to the correct knowledge base which are necessary to account for the student's behavior. The efficacy of the approach has been demonstrated by evaluating ASSERT using 100 students tested on a classification task covering concepts from an introductory course on the C++ programming language. Students who received feedback based on the models automatically generated by ASSERT performed significantly better on a post test than students who received simple reteaching.

Revising Bayesian Network Parameters Using Backpropagation

Sowmya Ramachandran and Raymond J. Mooney

Proceedings of the International Conference on Neural Networks (ICNN-96)

, Special Session on Knowledge-Based Artificial Neural Networks, Washington DC, June 1996.

The problem of learning Bayesian networks with hidden variables is known to be a hard problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is hard. In this paper, we present an approach that learns the conditional probabilities on a Bayesian network with hidden variables by transforming it into a multi-layer feedforward neural network (ANN). The conditional probabilities are mapped onto weights in the ANN, which are then learned using standard backpropagation techniques. To avoid the problem of exponentially large ANNs, we focus on Bayesian networks with noisy-or and noisy-and nodes. Experiments on real world classification problems demonstrate the effectiveness of our technique.
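
The noisy-or restriction is what keeps the mapped network small: a noisy-or node's probability factorizes over its parents, so each causal strength behaves like a single trainable weight. A standard formulation of such a node (not code from the paper) is:

    # Noisy-or node: the effect is absent only if every active cause's
    # inhibitor fires, so P(y=1) = 1 - prod over active parents of (1 - p_i).

    def noisy_or(parent_values, link_probs):
        p_off = 1.0
        for x, p in zip(parent_values, link_probs):
            if x:                        # only active parents can cause y
                p_off *= (1.0 - p)
        return 1.0 - p_off

    # two active causes with strengths 0.8 and 0.5
    print(noisy_or([1, 1], [0.8, 0.5]))  # 1 - 0.2 * 0.5 = 0.9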

Refinement of Bayesian Networks by Combining Connectionist and Symbolic Techniques

Sowmya Ramachandran

Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin, 1995.

Bayesian networks provide a mathematically sound formalism for representing and reasoning with uncertain knowledge and are as such widely used. However, acquiring and capturing knowledge in this framework is difficult. There is a growing interest in formulating techniques for learning Bayesian networks inductively. While the problem of learning a Bayesian network, given complete data, has been explored in some depth, the problem of learning networks with unobserved causes is still open. In this proposal, we view this problem from the perspective of theory revision and present a novel approach which adapts techniques developed for revising theories in symbolic and connectionist representations. Thus, we assume that the learner is given an initial approximate network (usually obtained from an expert). Our technique inductively revises the network to fit the data better. Our proposed system has two components: one component revises the parameters of a Bayesian network of known structure, and the other component revises the structure of the network. The component for parameter revision maps the given Bayesian network into a multi-layer feedforward neural network, with the parameters mapped to weights in the neural network, and uses standard backpropagation techniques to learn the weights. The structure revision component uses qualitative analysis to suggest revisions to the network when it fails to predict the data accurately. The first component has been implemented and we will present results from experiments on real world classification problems which show our technique to be effective. We will also discuss our proposed structure revision algorithm, our plans for experiments to evaluate the system, as well as some extensions to the system.

Automatic Student Modeling and Bug Library Construction using Theory Refinement

Paul T. Baffes

Ph.D. Thesis, Department of Computer Sciences, University of Texas at Austin, December, 1994.

The history of computers in education can be characterized by a continuing effort to construct intelligent tutorial programs which can adapt to the individual needs of a student in a one-on-one setting. A critical component of these intelligent tutorials is a mechanism for modeling the conceptual state of the student so that the system is able to tailor its feedback to suit individual strengths and weaknesses. The primary contribution of this research is a new student modeling technique which can automatically capture novel student errors using only correct domain knowledge, and can automatically compile trends across multiple student models into bug libraries. This approach has been implemented as a computer program, ASSERT, using a machine learning technique called theory refinement which is a method for automatically revising a knowledge base to be consistent with a set of examples. Using a knowledge base that correctly defines a domain and examples of a student's behavior in that domain, ASSERT models student errors by collecting any refinements to the correct knowledge base which are necessary to account for the student's behavior. The efficacy of the approach has been demonstrated by evaluating ASSERT using 100 students tested on a classification task using concepts from an introductory course on the C++ programming language. Students who received feedback based on the models automatically generated by ASSERT performed significantly better on a post test than students who received simple reteaching.

Comparing Methods For Refining Certainty Factor Rule-Bases

J. Jeffrey Mahoney and Raymond J. Mooney

Proceedings of the Eleventh International Workshop on Machine Learning

, pp. 173-180, Rutgers, NJ, July 1994. (ML-94)

This paper compares two methods for refining uncertain knowledge bases using propositional certainty-factor rules. The first method, implemented in the RAPTURE system, employs neural-network training to refine the certainties of existing rules but uses a symbolic technique to add new rules. The second method, based on the one used in the KBANN system, initially adds a complete set of potential new rules with very low certainty and allows neural-network training to filter and adjust these rules. Experimental results indicate that the former method results in significantly faster training and produces much simpler refined rule bases with slightly greater accuracy.

Modifying Network Architectures For Certainty-Factor Rule-Base Revision

J. Jeffrey Mahoney and Raymond J. Mooney

Proceedings of the International Symposium on Integrating Knowledge and Neural Heuristics 1994

, pp. 75-85, Pensacola, FL, May 1994. (ISIKNH-94)

This paper describes RAPTURE --- a system for revising probabilistic rule bases that converts symbolic rules into a connectionist network, which is then trained via connectionist techniques. It uses a modified version of backpropagation to refine the certainty factors of the rule base, and uses ID3's information-gain heuristic (Quinlan) to add new rules. Work is currently under way for finding improved techniques for modifying network architectures that include adding hidden units using the UPSTART algorithm (Frean). A case is made via comparison with fully connected connectionist techniques for keeping the rule base as close to the original as possible, adding new input units only as needed.

Extending Theory Refinement to M-of-N Rules

Paul T. Baffes and Raymond J. Mooney

Informatica

, 17 (1993), pp. 387-397.

In recent years, machine learning research has started addressing a problem known as theory refinement. The goal of a theory refinement learner is to modify an incomplete or incorrect rule base, representing a domain theory, to make it consistent with a set of input training examples. This paper presents a major revision of the EITHER propositional theory refinement system. Two issues are discussed. First, we show how run time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend EITHER to refine M-of-N rules. The resulting algorithm, NEITHER (New EITHER), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of NEITHER, we present experimental results from two real-world domains.
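
An M-of-N rule is easy to state procedurally (toy encoding, invented example): it fires when at least M of its N antecedents hold, a form that plain Horn clauses can only express by enumerating combinations:

    # M-of-N rule evaluation (toy).

    def m_of_n(m, antecedents, facts):
        return sum(a in facts for a in antecedents) >= m

    rule = (2, ["fever", "cough", "fatigue"])          # any 2 of 3 suffice
    print(m_of_n(*rule, facts={"fever", "fatigue"}))   # True
    print(m_of_n(*rule, facts={"cough"}))              # False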

Learning to Model Students: Using Theory Refinement to Detect Misconceptions

Paul T. Baffes

Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin, 1993.

A new student modeling system called ASSERT is described which uses domain independent learning algorithms to model unique student errors and to automatically construct bug libraries. ASSERT consists of two learning phases. The first is an application of theory refinement techniques for constructing student models from a correct theory of the domain being tutored. The second learning cycle automatically constructs the bug library by extracting common refinements from multiple student models which are then used to bias future modeling efforts. Initial experimental data will be presented which suggests that ASSERT is a more effective modeling system than other induction techniques previously explored for student modeling, and that the automatic bug library construction significantly enhances subsequent modeling efforts.

Symbolic Revision of Theories With M-of-N Rules

Paul T. Baffes and Raymond J. Mooney

Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence

, pp. 1135-1140, Chambery, France, 1993. (IJCAI-93)

This paper presents a major revision of the EITHER propositional theory refinement system. Two issues are discussed. First, we show how run time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend EITHER to refine M-of-N rules. The resulting algorithm, NEITHER (New EITHER), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of NEITHER, we present preliminary experimental results comparing it to EITHER and various other systems on refining the DNA promoter domain theory.

Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule-Bases

J. Jeffrey Mahoney and Raymond J. Mooney

Connection Science

, 5 (1993), pp. 339-364. (Special issue on Architectures for Integrating Neural and Symbolic Processing)

This paper describes Rapture --- a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. Rapture uses a modified version of backpropagation to refine the certainty factors of a Mycin-style rule base and it uses ID3's information gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combined approach generally performs better than previous methods.

Refinement of First-Order Horn-Clause Domain Theories

Bradley L. Richards and Raymond J. Mooney

Machine Learning

19, 2 (1995), pp. 95-131.

Knowledge acquisition is a difficult and time-consuming task, and as error-prone as any human activity. The task of automatically improving an existing knowledge base using learning methods is addressed by a new class of systems performing theory refinement. Until recently, such systems were limited to propositional theories. This paper presents a system, FORTE (First-Order Revision of Theories from Examples), for refining first-order Horn-clause theories. Moving to a first-order representation opens many new problem areas, such as logic program debugging and qualitative modelling, that are beyond the reach of propositional systems. FORTE uses a hill-climbing approach to revise theories. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. FORTE has been tested in several domains including logic programming and qualitative modelling.

Combining Symbolic and Neural Learning to Revise Probabilistic Theories

J. Jeffrey Mahoney & Raymond J. Mooney

Proceedings of the 1992 Machine Learning Workshop on Integrated Learning in Real Domains

, Aberdeen, Scotland, July 1992.

This paper describes RAPTURE --- a system for revising probabilistic theories that combines symbolic and neural-network learning methods. RAPTURE uses a modified version of backpropagation to refine the certainty factors of a Mycin-style rule-base and it uses ID3's information gain heuristic to add new rules. Results on two real-world domains demonstrate that this combined approach performs as well or better than previous methods.

Using Theory Revision to Model Students and Acquire Stereotypical Errors

Paul T. Baffes and Raymond J. Mooney

Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society

, pp. 617-622, Bloomington, IN, July 1992.

Student modeling has been identified as an important component to the long term development of Intelligent Computer-Aided Instruction (ICAI) systems. Two basic approaches have evolved to model student misconceptions. One uses a static, predefined library of user bugs which contains the misconceptions modeled by the system. The other uses induction to learn student misconceptions from scratch. Here, we present a third approach that uses a machine learning technique called theory revision. Using theory revision allows the system to automatically construct a bug library for use in modeling while retaining the flexibility to address novel errors.

A Preliminary PAC Analysis of Theory Revision

Raymond J. Mooney

March 1992

Computational Learning Theory and Natural Learning Systems

, Vol. 3, T. Petsche, S. Judd, and S. Hanson, Eds., MIT Press, 1995, pp. 43-53.

This paper presents a preliminary analysis of the sample complexity of theory revision within the framework of PAC (Probably Approximately Correct) learnability theory. By formalizing the notion that the initial theory is "close" to the correct theory, we show that the sample complexity of an optimal propositional Horn-clause theory revision algorithm is $O((\ln 1/\delta + d \ln(s_0 + d + n))/\epsilon)$, where $d$ is the syntactic distance between the initial and correct theories, $s_0$ is the size of the initial theory, $n$ is the number of observable features, and $\epsilon$ and $\delta$ are the standard PAC error and probability bounds. The paper also discusses the problems raised by the computational complexity of theory revision.

Automated Debugging of Logic Programs via Theory Revision

Raymond J. Mooney & Bradley L. Richards

Proceedings of the Second International Workshop on Inductive Logic Programming

, Tokyo, Japan, June 1992.

This paper presents results on using a theory revision system to automatically debug logic programs. FORTE is a recently developed system for revising function-free Horn-clause theories. Given a theory and a set of training examples, it performs a hill-climbing search in an attempt to minimally modify the theory to correctly classify all of the examples. FORTE makes use of methods from propositional theory revision, Horn-clause induction (FOIL), and inverse resolution. The system has been successfully used to debug logic programs written by undergraduate students for a programming languages course.

Batch versus Incremental Theory Refinement

Raymond J. Mooney

Proceedings of AAAI Spring Symposium on Knowledge Assimilation

, Stanford, CA, March, 1992.

Most existing theory refinement systems are not incremental. However, any theory refinement system whose input and output theories are compatible can be used to incrementally assimilate data into an evolving theory. This is done by continually feeding its revised theory back in as its input theory. An incremental batch approach, in which the system assimilates a batch of examples at each step, seems most appropriate for existing theory revision systems. Experimental results with the EITHER theory refinement system demonstrate that this approach frequently increases efficiency without significantly decreasing the accuracy or the simplicity of the resulting theory. However, if the system produces bad initial changes to the theory based on only a small amount of data, these bad revisions can "snowball" and result in an overall decrease in performance.
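
The incremental batch scheme amounts to a feedback loop, which the following schematic (with an invented toy revision function) makes explicit: each revised theory becomes the input theory for the next batch of examples:

    # Incremental batch theory refinement: output theory feeds back in.

    def incremental_refine(theory, batches, revise):
        for batch in batches:
            theory = revise(theory, batch)   # revised theory becomes input
        return theory

    # toy usage: the "theory" is a rule set and revision simply unions
    revise = lambda theory, batch: theory | set(batch)
    print(incremental_refine(set(), [["r1"], ["r2", "r3"]], revise))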

A Multistrategy Approach to Theory Refinement

Raymond J. Mooney & Dirk Ourston

Machine Learning: A Multistrategy Approach

, Vol. IV, R.S. Michalski & G. Tecuci (eds.), pp. 141-164, Morgan Kaufmann, San Mateo, CA, 1994.

This chapter describes a multistrategy system that employs independent modules for deductive, abductive, and inductive reasoning to revise an arbitrarily incorrect propositional Horn-clause domain theory to fit a set of preclassified training instances. By combining such diverse methods, EITHER is able to handle a wider range of imperfect theories than other theory revision systems while guaranteeing that the revised theory will be consistent with the training data. EITHER has successfully revised two actual expert theories, one in molecular biology and one in plant pathology. The results confirm the hypothesis that using a multistrategy system to learn from both theory and data gives better results than using either theory or data alone.

Theory Refinement Combining Analytical and Empirical Methods

Dirk Ourston and Raymond J. Mooney

Artificial Intelligence

, 66 (1994), pp. 311-344.

This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, in general fewer training examples are required to attain a given level of performance (classification accuracy) compared to a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis.

Improving Shared Rules in Multiple Category Domain Theories

Dirk Ourston and Raymond J. Mooney

Proceedings of the Eighth International Machine Learning Workshop

, pp. 534-538, Evanston, IL, June 1991.

This paper presents an approach to improving the classification performance of a multiple category theory by correcting intermediate rules which are shared among the categories. Using this technique, the performance of a theory in one category can be improved through training in an entirely different category. Examples of the technique are presented and experimental results are given.

Constructive Induction in Theory Refinement

Raymond J. Mooney and Dirk Ourston

Proceedings of the Eighth International Machine Learning Workshop

, pp. 178-182, Evanston, IL. June 1991.

This paper presents constructive induction techniques recently added to the EITHER theory refinement system. These additions allow EITHER to handle arbitrary gaps at the "top," "middle," and/or "bottom" of an incomplete domain theory. Intermediate concept utilization employs existing rules in the theory to derive higher-level features for use in induction. Intermediate concept creation employs inverse resolution to introduce new intermediate concepts in order to fill gaps in a theory that span multiple levels. These revisions allow EITHER to make use of imperfect domain theories in the ways typical of previous work in both constructive induction and theory refinement. As a result, EITHER is able to handle a wider range of theory imperfections than does any other existing theory refinement system.

Theory Refinement with Noisy Data

Raymond J. Mooney and Dirk Ourston

Technical Report AI 91-153, Artificial Intelligence Lab, University of Texas at Austin, March 1991.

This paper presents a method for revising an approximate domain theory based on noisy data. The basic idea is to avoid making changes to the theory that account for only a small amount of data. This method is implemented in the EITHER propositional Horn-clause theory revision system. The paper presents empirical results on artificially corrupted data to show that this method successfully prevents over-fitting. In other words, when the data is noisy, performance on novel test data is considerably better than revising the theory to completely fit the data. When the data is not noisy, noise processing causes no significant degradation in performance. Finally, noise processing increases efficiency and decreases the complexity of the resulting theory.

Information extracted from URL : http://www.cs.utexas.edu/users/ml/speedup.html START

Find the information from the current URL:

Using Multi-Strategy Learning to Improve Planning Efficiency and Quality

Tara A. Estlin

Ph.D. Thesis, Department of Computer Sciences, University of Texas at Austin, May 1998.

117 pages

Also appears as AI Laboratory Technical Report AI98-269

Artificial intelligence planning systems have become an important tool for automating a wide variety of tasks. However, even the most current planning algorithms suffer from two major problems. First, they often require infeasible amounts of computation time to solve problems in most domains. And second, they are not guaranteed to return the best solution to a planning problem, and in fact can sometimes return very low-quality solutions. One way to address these problems is to provide a planning system with domain-specific control knowledge, which helps guide the planner towards more promising search paths. Machine learning techniques enable a planning system to automatically acquire search-control knowledge for different applications. A considerable amount of planning and learning research has been devoted to acquiring rules that improve planning efficiency, also known as speedup learning. Much less work has been done on learning knowledge to improve the quality of plans, even though this is an essential feature for many real-world planning systems. Furthermore, even less research has been done in acquiring control knowledge to improve both of these metrics.

The learning system presented in this dissertation, called SCOPE, is a unique approach to learning control knowledge for planning. SCOPE learns domain-specific control rules for a planner that improve both planning efficiency and plan quality, and it is one of the few systems that can learn control knowledge for partial-order planning. SCOPE's architecture integrates explanation-based learning (EBL) with techniques from inductive logic programming. Specifically, EBL is used to constrain an inductive search for control heuristics that help a planner choose between competing plan refinements. Since SCOPE uses a very flexible training approach, its learning algorithm can be easily focused to prefer search paths that are better for particular evaluation metrics. SCOPE is extensively tested on several planning domains, including a logistics transportation domain and a production manufacturing domain. In these tests, it is shown to significantly improve both planning efficiency and quality, and to be more robust than a competing approach.

Learning to Improve both Efficiency and Quality of Planning

Tara A. Estlin and Raymond J. Mooney

Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI-97)

Nagoya, Japan, pp. 1227-1232, August, 1997.

Most research in learning for planning has concentrated on efficiency gains. Another important goal is improving the quality of final plans. Learning to improve plan quality has been examined by a few researchers; however, little research has been done on learning to improve both efficiency and quality. This paper explores this problem by using the SCOPE learning system to acquire control knowledge that improves on both of these metrics. Since SCOPE uses a very flexible training approach, we can easily focus its learning algorithm to prefer search paths that are better for particular evaluation metrics. Experimental results show that SCOPE can significantly improve both the quality of final plans and overall planning efficiency.

Integrating EBL and ILP to Acquire Control Rules for Planning

Tara A. Estlin and Raymond J. Mooney

Proceedings of the Third International Workshop on Multi-Strategy Learning

, pp. 271-279, Harpers Ferry, WV, May 1996. (MSL-96).

Most approaches to learning control information in planning systems use explanation-based learning to generate control rules. Unfortunately, EBL alone often produces overly complex rules that actually decrease planning efficiency. This paper presents a novel learning approach for control knowledge acquisition that integrates explanation-based learning with techniques from inductive logic programming. EBL is used to constrain an inductive search for selection heuristics that help a planner choose between competing plan refinements. SCOPE is one of the few systems to address learning control information in the newer partial-order planners. Specifically, SCOPE learns domain-specific control rules for a version of the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains.

Multi-Strategy Learning of Search Control for Partial-Order Planning

Tara A. Estlin and Raymond J. Mooney

Proceedings of the Thirteenth National Conference on Artificial Intelligence

, pp. 843-848, Portland, OR, August, 1996. (AAAI-96)

Most research in planning and learning has involved linear, state-based planners. This paper presents SCOPE, a system for learning search-control rules that improve the performance of a partial-order planner. SCOPE integrates explanation-based and inductive learning techniques to acquire control rules for a partial-order planner. Learned rules are in the form of selection heuristics that help the planner choose between competing plan refinements. Specifically, SCOPE learns domain-specific control rules for a version of the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains.

Integrating Explanation-Based and Inductive Learning Techniques to Acquire Search-Control for Planning

Tara A. Estlin

Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin, 1996. (Technical Report AI96-250)

Planning systems have become an important tool for automating a wide variety of tasks. Control knowledge guides a planner to find solutions quickly and is crucial for efficient planning in most domains. Machine learning techniques enable a planning system to automatically acquire domain-specific search-control knowledge for different applications. Past approaches to learning control information have usually employed explanation-based learning (EBL) to generate control rules. Unfortunately, EBL alone often produces overly complex rules that actually decrease rather than improve overall planning efficiency. This paper presents a novel learning approach for control knowledge acquisition that integrates explanation-based learning with techniques from inductive logic programming. In our learning system SCOPE, EBL is used to constrain an inductive search for control heuristics that help a planner choose between competing plan refinements. SCOPE is one of the few systems to address learning control information for newer, partial-order planners. Specifically, this proposal describes how SCOPE learns domain-specific control rules for the UCPOP planning algorithm. The resulting system is shown to produce significant speedup in two different planning domains, and to be more effective than a pure EBL approach.

Future research will be performed in three main areas. First, SCOPE's learning algorithm will be extended to include additional techniques such as constructive induction and rule utility analysis. Second, SCOPE will be more thoroughly tested; several real-world planning domains have been identified as possible testbeds, and more in-depth comparisons will be drawn between SCOPE and other competing approaches. Third, SCOPE will be implemented in a different planning system in order to test its portability to other planning algorithms. This work should demonstrate that machine-learning techniques can be a powerful tool in the quest for tractable real-world planning.

Hybrid Learning of Search Control for Partial-Order Planning

Tara A. Estlin and Raymond J. Mooney

New Directions in AI Planning

, M. Ghallab and A. Milani, Eds, IOS Press, 1996, pp. 129-140.

This paper presents results on applying a version of the DOLPHIN search-control learning system to speed up a partial-order planner. DOLPHIN integrates explanation-based and inductive learning techniques to acquire effective clause-selection rules for Prolog programs. A version of the UCPOP partial-order planning algorithm has been implemented as a Prolog program and DOLPHIN used to automatically learn domain-specific search control rules that help eliminate backtracking. The resulting system is shown to produce significant speedup in several planning domains.

Integrating ILP and EBL

Raymond J. Mooney and John M. Zelle

SIGART Bulletin

, Volume 5, number 1, Jan. 1994, pp. 12-21.

This paper presents a review of recent work that integrates methods from Inductive Logic Programming (ILP) and Explanation-Based Learning (EBL). ILP and EBL methods have complementary strengths and weaknesses and a number of recent projects have effectively combined them into systems with better performance than either of the individual approaches. In particular, integrated systems have been developed for guiding induction with prior knowledge (ML-SMART, FOCL, GRENDEL), refining imperfect domain theories (FORTE, AUDREY, Rx), and learning effective search-control knowledge (AxA-EBL, DOLPHIN).

Learning Search-Control Heuristics for Logic Programs: Applications to Speedup Learning and Language Acquisition

John M. Zelle

Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin, 1993.

This paper presents a general framework, learning search-control heuristics for logic programs, which can be used to improve both the efficiency and accuracy of knowledge-based systems expressed as definite-clause logic programs. The approach combines techniques of explanation-based learning and recent advances in inductive logic programming to learn clause-selection heuristics that guide program execution. Two specific applications of this framework are detailed: dynamic optimization of Prolog programs (improving efficiency) and natural language acquisition (improving accuracy). In the area of program optimization, a prototype system, DOLPHIN, is able to transform some intractable specifications into polynomial-time algorithms, and outperforms competing approaches in several benchmark speedup domains. A prototype language acquisition system, CHILL, is also described. It is capable of automatically acquiring semantic grammars, which uniformly incorporate syntactic and semantic constraints to parse sentences into case-role representations. Initial experiments show that this approach is able to construct accurate parsers which generalize well to novel sentences and significantly outperform previous approaches to learning case-role mapping based on connectionist techniques. Planned extensions of the general framework and the specific applications, as well as plans for further evaluation, are also discussed.

Combining FOIL and EBG to Speed-Up Logic Programs

John M. Zelle and Raymond J. Mooney

Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence

, pp. 1106-1111, Chambery, France, 1993. (IJCAI-93)

This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm is shown to be an improvement over competing EBL approaches in several domains. Additionally, the algorithm is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.
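
The training data for such control rules can be read off search traces: clause choices that lie on a solution path serve as positive examples, failed choices as negatives, and induction then finds a test that separates them. The Python sketch below of that harvesting step is purely illustrative; the Node structure and its fields are assumptions, not the paper's representation.

from dataclasses import dataclass, field

@dataclass
class Node:
    goal: str        # the subgoal being solved at this choice point
    clause: int      # which clause was tried
    solved: bool     # did this choice lead to a solution?
    children: list = field(default_factory=list)

def harvest(node, pos, neg):
    # Choices on a solution path become positive examples for the
    # pair (goal, clause); failed choices become negative examples.
    (pos if node.solved else neg).append((node.goal, node.clause))
    for child in node.children:
        harvest(child, pos, neg)
    return pos, neg

if __name__ == "__main__":
    trace = Node("member(2,[1,2])", 2, True,
                 [Node("member(2,[2])", 1, True),
                  Node("member(2,[2])", 2, False,
                       [Node("member(2,[])", 1, False)])])
    print(harvest(trace, [], []))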

Speeding-up Logic Programs by Combining EBG and FOIL

John M. Zelle and Raymond J. Mooney

Proceedings of the 1992 Machine Learning Workshop on Knowledge Compilation and Speedup Learning

, Aberdeen, Scotland, July 1992.

This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm produces not only EBL-like speed up of problem solvers, but is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.
 
 
 
 

20)

http://www.cs.utexas.edu/users/ndale/

Information extracted from URL : http://www.cs.utexas.edu/users/ndale/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/ndale/books.htm
 
 
 
 

21)

http://www.cs.utexas.edu/users/novak/

Information extracted from URL : http://www.cs.utexas.edu/users/novak/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/novak/papers.html

Find the information from the current URL:

Information extracted from URL : http://www.cs.utexas.edu/users/novak/autop.html START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/novak/papers.html

Information extracted from URL : http://www.cs.utexas.edu/users/novak/physics.html START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/novak/papers.html
 
 
 
 

22)

http://www.cs.utexas.edu/users/plaxton/

Information extracted from URL : http://www.cs.utexas.edu/users/plaxton/ START

Find links which contain the Publication information:

http://www.cs.utexas.eduhtml/pubs.html

Information extracted from URL : http://www.cs.utexas.edu/users/vlr/sac.html START

Information extracted from URL : http://www.cs.utexas.edu/users/UTCS/report/jan1999/plaxton.html START

Find the information from the current URL:

M. R. Korupolu, C. G. Plaxton, and R. Rajaraman, "Placement algorithms for hierarchical cooperative caching," in Proceedings of the 10th Annual ACM-SIAM Symposium on Discrete Algorithms, January 1999.

N. S. Arora, R. D. Blumofe, and C. G. Plaxton, "Thread scheduling for multiprogrammed multiprocessors," in Proceedings of the 10th Annual ACM Symposium on Parallel Algorithms and Architectures, June 1998, pp. 119-129.

M. R. Korupolu, C. G. Plaxton, and R. Rajaraman, "Analysis of a local search heuristic for facility location problems," in Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, CA, January 1998, pp. 1-10.

F. T. Leighton and C. G. Plaxton, "Hypercubic sorting networks," SIAM Journal on Computing, vol. 27, pp. 1-47, 1998.

P. D. MacKenzie, C. G. Plaxton, and R. Rajaraman, "On contention resolution protocols and associated probabilistic phenomena," Journal of the ACM, vol. 45, pp. 324-378, 1998.

C. G. Plaxton and T. Suel, "Lower bounds for Shellsort," Journal of Algorithms, vol. 23, pp. 221-240, 1997.
 
 
 
 

23)

http://www.cs.utexas.edu/users/porter/

Information extracted from URL : http://www.cs.utexas.edu/users/porter/ START

Find the information from the current URL:

P. Clark and B. Porter. Building Concept Representations from Reusable Components. In AAAI'97, pages 369-376, CA: AAAI Press, 1997. Best Paper Award.

J. Rickel and B. Porter, Automated Modeling of Complex Systems to Answer Prediction Questions, Artificial Intelligence Journal, 93(1-2), pp. 201-260, 1997.

J. Lester and B. Porter, Developing and Empirically Evaluating Robust Explanation Generators: The KNIGHT Experiments, Computational Linguistics Journal, 23(1), pp. 65-101, 1997.

J. Lester and B. Porter, Scaling Up Explanation Generation: Large-Scale Knowledge Bases and Empirical Studies, Proceedings of the National Conference on Artificial Intelligence, 1996.

J. Rickel and B. Porter (1994), Automated Modeling for Answering Prediction Questions: Selecting the Time Scale and System Boundary, AAAI-94, pp. 1191-1198, Cambridge, MA: AAAI/MIT Press.

L. Acker and B. Porter, Extracting Viewpoints from Knowledge Bases, Proceedings of the National Conference on Artificial Intelligence, pp. 547-552, 1994.

K. Branting and B. Porter (1991). Rules and Precedents as Complementary Warrants, AAAI-91, pp. 3-9.

B. Porter, R. Bareiss, and R. Holte (1990). Concept Learning and Heuristic Classification in Weak-Theory Domains, Artificial Intelligence Journal, v45 (nos. 1-2), pp. 229-264.

Information extracted from URL : http://www.cs.utexas.edu/users/mfkb/index.html START

Find the information from the current URL:

Click here to see some selected publications from our group.
 
 
 
 

24)

http://www.cs.utexas.edu/users/rdb/

Information extracted from URL : http://www.cs.utexas.edu/users/rdb/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/rdb/papers.html
 
 
 
 

25)

http://www.cs.utexas.edu/users/risto/

Information extracted from URL : http://www.cs.utexas.edu/users/risto/ START

Information extracted from URL : http://www.cs.utexas.edu/users/nn START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/nn/pages/publications/publications.html
 
 
 
 

26)

http://www.cs.utexas.edu/users/rvdg/

Information extracted from URL : http://www.cs.utexas.edu/users/rvdg/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/rvdg/journal.html

Find the information from the current URL:

Books

Journal Publications

Conference Publications

Technical Reports

Tutorials

Information extracted from URL : http://www.cs.utexas.edu/users/plapack START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/plapack/new/pubs.html

Information extracted from URL : http://www.ticam.utexas.edu/Groups/Composites/ START
 
 
 
 

27)

http://www.cs.utexas.edu/users/vin/

Information extracted from URL : http://www.cs.utexas.edu/users/vin/ START
 
 
 
 
28)

http://www.cs.utexas.edu/users/vl/

Information extracted from URL : http://www.cs.utexas.edu/users/vl/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/vl/papers.html

Find the information from the current URL:

On scientists and science

My favorite stories

Three silly jokes

Steven Weinberg on scientific revolutions

Isaiah Berlin on pluralism

Donald Kagan on national honor

Václav Havel on the temptations of political power

Edsger W. Dijkstra on universities
 
 
 
 

29)

http://www.cs.utexas.edu/users/vlr/

Information extracted from URL : http://www.cs.utexas.edu/users/vlr/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/vlr/pub.html
 
 
 
 

30)

http://www.cs.utexas.edu/users/walkerh/

Information extracted from URL : http://www.cs.utexas.edu/users/walkerh/ START
 
 
 
 
31)

http://www.cs.utexas.edu/users/wilson/

Information extracted from URL : http://www.cs.utexas.edu/users/wilson/ START

Information extracted from URL : http://www.cs.utexas.edu/users/oops/ START

Find links which contain the Publication information:

http://www.cs.utexas.edu/users/oops/papers.html

Information extracted from URL : http://www.cs.utexas.edu/ START

Information extracted from URL : http://www.utexas.edu START
 
 
 
 

32)

http://www.cs.utexas.edu/users/young/

Information extracted from URL : http://www.cs.utexas.edu/users/young/ START

Find the information from the current URL:

D. M. Young, "Garrett Birkhoff and Applied Mathematics," in Notices of the American Mathematical Society, vol. 44, no. 11, pp. 1446-1450, 1997.

D. M. Young and D. R. Kincaid, "A new class of parallel alternating-type iterative methods," Journal of Computational and Applied Mathematics, vol. 74, pp. 331-344, 1996.

D. M. Young, S. Xiao, and K. Baker, "Periodically generated iterative methods for solving elliptic equations," Applied Numerical Mathematics, vol. 19, pp. 375-387, 1995.

S. Xiao and D. M. Young, "Multiple coarse grid multigrid methods for solving elliptic problems," in NASA Conference Proceedings, Seventh Copper Mountain Conference on Multigrid Methods, Nelson, Manteuffel et al. (Eds.), vol. 3339, part 2, pp. 771-791, 1996.

D. M. Young and B. Vona, "Parallel multilevel methods," Studies in Computer Science, Rice and Demillo (Eds.), Plenum Press, New York, 1994.

D. M. Young and B. Vona, "On the use of rational iterative methods for solving large sparse linear systems," Applied Numerical Mathematics, vol. 10, pp. 261-278, 1992.