Newsgroups: comp.lang.prolog
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel-eecis!gatech!news.mathworks.com!cam-news-hub1.bbnplanet.com!su-news-hub1.bbnplanet.com!news.bbnplanet.com!pacbell.com!amdahl.com!ogma.ccc.amdahl.com!netnews
From: "Trecom" <asperd@trecom.com>
Subject: Re: How provide input to parser
X-Nntp-Posting-Host: 159.199.162.17
Message-ID: <01bc228c$cfe3a620$11a2c79f@Asperd.trecom>
Sender: netnews@ccc.amdahl.com (Usenet Administration)
Organization: Trecom Business Systems
X-Newsreader: Microsoft Internet News 4.70.1155
References: <01bc202e$fa39aec0$11a2c79f@Asperd.trecom> <3311AA5B.41C67EA6@bnr.co.uk>
Date: Mon, 24 Feb 1997 19:53:52 GMT
Lines: 15

Thanks.  I'm beginning to see how this would work.  I think what has been
throwing me is that I've been working with Lex and Yacc, and so have been
thinking more in terms of parsers getting each token as needed, rather than
keeping a store of them in a list.  By looking for semi-colons I would
never create a disastrously long list, which is my primary concern.  But, I
could perhaps have the tokenizer maintain a list of some arbitrary
length greater than the number of tokens needed by any single
production (say 20).  Tokens would be pulled off the list as needed.
Then, I could routinely call the tokenizer after each production is
processed, to get however many more tokens are necessary to refill the list
to its arbitrary limit.  There will be a processing penalty for maintaining
the list, of course.  Your approach requires no counting and would probably
work for compound statements contained within curly braces.    
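Just to make the refill idea concrete, here is roughly what I have in
mind, sketched in Python for illustration (the names TokenBuffer,
refill, and take are made up, and the real thing would of course be a
Prolog tokenizer feeding a token list):

```python
class TokenBuffer:
    """Maintains a token list of at most `limit` tokens, refilled
    from a token source after each production is processed."""

    def __init__(self, source, limit=20):
        self.source = source   # iterator yielding tokens
        self.limit = limit     # arbitrary buffer limit (say 20)
        self.tokens = []       # the maintained token list
        self.refill()

    def refill(self):
        # Pull tokens from the source until the list is back at its
        # limit or the input is exhausted.
        while len(self.tokens) < self.limit:
            tok = next(self.source, None)
            if tok is None:
                break
            self.tokens.append(tok)

    def take(self):
        # A production consumes tokens one at a time from the front.
        return self.tokens.pop(0) if self.tokens else None

# Usage: buffer a small token stream, consume one production's worth
# of tokens, then top the list back up.
src = iter("int x = 1 ; int y = 2 ;".split())
buf = TokenBuffer(src, limit=5)
stmt = [buf.take() for _ in range(5)]  # one statement, up to the ';'
buf.refill()                           # refill before the next production
```

The penalty mentioned above shows up in the `pop(0)` and the refill
loop; a real implementation might use a deque or difference list
instead.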


