Page Scrape

Here we do not mean the simple scraping of a web page by a script. Instead we are talking about the idea that each page is __responsible__ for determining its own context. This context - let's call it what it is - is a graph of meaning, and of that meaning's neighbours, for that page.

Ward Cunningham writes this beautifully:

> A page viewed scrapes its neighbors. We examine the sites mentioned in any page and go fetch their sitemaps. This is the scrape step. It need not be directed by readers.
>
> We suggest that every server should scrape the neighborhood of the pages it serves before they are viewed by readers.
>
> A server could scrape further to the neighborhood of its neighbors or beyond. For a small server with few sites and few pages within those sites a deeper search might make sense, or not.
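
The scrape step described above lends itself to a short sketch. The one below is a minimal, hedged illustration, not the wiki's actual implementation: it assumes the federated wiki convention of serving a sitemap at `/system/sitemap.json`, and it assumes that references to other sites appear as a `site` property on a page's story items; both the `SitemapEntry` shape and the function names are illustrative.

```typescript
// A sketch of the scrape step: find the sites a page mentions,
// then fetch each neighbor's sitemap. The JSON shapes below are
// assumptions for illustration, not the wiki's exact schema.

interface SitemapEntry {
  slug: string;
  title: string;
  synopsis?: string;
}

// Collect the set of sites mentioned in a page's JSON, assuming
// references carry a `site` property on story items.
function sitesMentioned(page: any): Set<string> {
  const sites = new Set<string>();
  for (const item of page.story ?? []) {
    if (typeof item.site === "string") sites.add(item.site);
  }
  return sites;
}

// The scrape step itself: it needs no direction from readers and
// can run server-side before the page is viewed.
async function scrapeNeighbors(page: any): Promise<Map<string, SitemapEntry[]>> {
  const neighborhood = new Map<string, SitemapEntry[]>();
  for (const site of sitesMentioned(page)) {
    try {
      const res = await fetch(`http://${site}/system/sitemap.json`);
      if (res.ok) neighborhood.set(site, await res.json());
    } catch {
      // Unreachable neighbors are simply absent from the neighborhood.
    }
  }
  return neighborhood;
}
```

Scraping further, to the neighborhood of neighbors, would repeat this step over the pages found in each fetched sitemap; as the quote suggests, whether that deeper search makes sense depends on the size of the server.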

When we write we structure a set of thoughts, some of them our own, and we relate them in arrangements that are more or less logical according to the author. As we read, the author invites us to traverse this graph of thought, whether it be linear or not.

The medium we seek to enable should be expressive enough to allow the author to represent these thoughts as she sees fit, the viewer to experience these thoughts mingled with their own, and code to explore such links in a rich set of meaningful ways. This cluster of needs does not suit a rigid structure. It cannot start with an ontology.

A page is in search of friends, as any idea seeks both to stand on her own and to reach out for relationships that affect the world. Ideas are social, and yet they need to be independent enough to survive the passage of time and transmission.

A graph, indeed a directed graph, is a data structure rich enough to express much of our page's desire both to connect and to express autonomy. Each page is both a story and some form of graph, one which is yet to be determined, or rather whose determination must evolve over time.
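
To make the directed-graph claim concrete, here is a minimal sketch of a page neighborhood as a directed graph. The `PageId` form ("site/slug") and the class and method names are assumptions chosen for illustration; edges point from a page to each page it references, so a page's outgoing edges are the relationships it chose for itself.

```typescript
// A minimal directed graph of pages. Nodes are page identifiers,
// assumed here to take the form "site/slug"; edges record which
// pages a given page reaches out to.

type PageId = string; // e.g. "example.wiki.org/page-scrape" (hypothetical)

class NeighborhoodGraph {
  private edges = new Map<PageId, Set<PageId>>();

  // Record that `from` references `to`.
  link(from: PageId, to: PageId): void {
    if (!this.edges.has(from)) this.edges.set(from, new Set());
    this.edges.get(from)!.add(to);
  }

  // The pages this page connects to: its chosen relationships.
  // A page with no outgoing edges still stands, autonomously.
  neighbors(page: PageId): PageId[] {
    return [...(this.edges.get(page) ?? [])];
  }
}
```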

And yet our page scrape must begin humbly. See also is one such start, as is the more conventional List of Links from each page (sketched below). Next we have our prototype in the Graph Plugin, and when we seek more structure still, an Argument Map, which is yet another interpretation of a page's internal and external relations. In this one paragraph we can begin to see an evolution of how each page might become responsible for its own scrape.
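
As a humble start of that evolution, the List of Links can be read straight out of a page. The sketch below assumes the wiki's `[[Internal Link]]` markup and a `text` property on story items; both are assumptions about the page format, and the function name is illustrative.

```typescript
// Extract the List of Links from a page by matching [[...]] markup
// in each story item's text (an assumption about the page format).
function listOfLinks(page: any): string[] {
  const links: string[] = [];
  for (const item of page.story ?? []) {
    for (const match of String(item.text ?? "").matchAll(/\[\[([^\]]+)\]\]/g)) {
      links.push(match[1]);
    }
  }
  return links;
}
```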

# See also