AllegroServe (AServe) defines several functions for retrieving web pages. The simplest to use is do-http-request, in the net.aserve.client package.
To try it, type
(net.aserve.client:do-http-request url)
to read a web page, where url is a string containing a complete URL. Pick a page that's not too long. If the call succeeds, you should get back a string with the contents of the entire page. If you get an error, try another page. I've had some trouble with URLs that leave out the file name, though an address with search keys, such as you see on Google, should work.
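For example (the URL here is just an illustration; substitute any page you like). One detail worth knowing: do-http-request returns several values, of which the page body is the first, so multiple-value-bind lets you check the response code too. A minimal sketch:

```lisp
;; Fetch a page; the body string is the first of several return values
;; (later values include the HTTP response code and the headers).
;; The URL below is only an example.
(multiple-value-bind (body response-code)
    (net.aserve.client:do-http-request "http://www.google.com/search?q=lisp")
  (when (eql response-code 200)
    ;; Show just the start of the page rather than the whole string.
    (subseq body 0 (min 200 (length body)))))
```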
If you are not using Allegro Common Lisp, get the open-source version of the HTML parser from http://www.cliki.net/CL-HTML-Parse. Because you should already have portable AllegroServe installed, you only need to compile and load cl-html-parse/dev/cl-html.parse.lisp. Be sure to compile it; it takes forever to run interpreted.
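The compile-and-load step can be done in one expression, since compile-file returns the pathname of the compiled file. A sketch, assuming the file is at the relative path given above:

```lisp
;; Compile the parser once, then load the compiled (fasl) output.
;; compile-file returns the output pathname, so the two steps combine:
(load (compile-file "cl-html-parse/dev/cl-html.parse.lisp"))
```

On later sessions you can just load the compiled file directly instead of recompiling.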
Now do
(net.html.parser:parse-html (net.aserve.client:do-http-request ...))
to test parsing the HTML returned by the URL you tried before. You should get back a list version of the page, i.e., a version you can process with standard Lisp list functions.
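To get a feel for the list structure before working with a full page, you can parse a small HTML string directly; the string here is just an illustration. Roughly, tags become keywords at the head of nested sublists, so the usual car/cdr and friends apply (a real page's structure will be messier):

```lisp
;; Parse a tiny HTML fragment instead of a fetched page.
;; The result is a list of forms you can walk with ordinary
;; list functions such as first, rest, and mapcar.
(let ((parsed (net.html.parser:parse-html "<p>Hello, <b>world</b>!</p>")))
  (first parsed))  ; the first top-level element of the parse
```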
Email to c-riesbeck@northwestern.edu
As before, the best source of help will be the CS 325 news server.
Comments? Send mail to Chris Riesbeck. Put EECS 325 in the Subject.