Andrew Wilkinson

Random Ramblings on Programming

Posts Tagged ‘django’

Django ImportError Hiding


A little while ago I was asked what my biggest gripe with Django was. At the time I couldn’t think of a good answer because, since I started using Django in the pre-1.0 days, most of the rough edges have been smoothed. Yesterday though, I encountered an error that made me wish I had thought of it at the time.

The code that produced the error looked like this:

from django.db import models

class MyModel(models.Model):
    ...

    def save(self):
        models.Model.save(self)

        ...

    ...

The error that was raised was AttributeError: 'NoneType' object has no attribute 'Model'. This means that rather than containing a module object, models was None. Clearly this is impossible as the class could not have been created if that was the case. Impossible or not, it was clearly happening.

Adding a print statement to the module showed that when it was imported the models variable did contain the expected module object. What it also showed was that the module was being imported more than once, something that should also be impossible.

After a wild goose chase investigating reasons why the module might be imported twice I tracked it down to the load_app method in django/db/models/loading.py. The code there looks something like this:

    def load_app(self, app_name, can_postpone=False):
        try:
            models = import_module('.models', app_name)
        except ImportError:
            # Ignore exception
            pass

Now I’m being a bit harsh here, and the exception handler does contain a comment about working out whether it should reraise the exception. The issue is that it wasn’t reraising the exception, and when that happens it’s really not clear what has gone wrong. It turns out that I had a misspelt module name in an import statement in a different module. This raised an ImportError which was caught and hidden, and then Django repeatedly attempted to import the models as they were referenced in the models of other apps. The strange exception that was originally encountered is probably an artefact of Python’s garbage collection, although how exactly it occurred is still not clear to me.

There are a number of tickets (#6379, #14130 and probably others) on this topic. A common refrain in Python is that it’s easier to ask for forgiveness than to ask for permission, and like Django I certainly agree with that principle and follow it most of the time.

I always follow the rule that try/except clauses should cover as little code as possible. Consider the following piece of code.

try:
    var.method1()

    var.member.method2()
except AttributeError:
    pass  # handle error

Which of the three attribute accesses are we actually trying to catch here? Handling exceptions like this is a useful way of implementing duck typing while following the easier-to-ask-forgiveness principle. What this code doesn’t make clear is which member or method is actually optional. A better way to write this would be:

var.method1()

try:
    member = var.member
except AttributeError:
    pass  # handle error
else:
    member.method2()

Now the code is very clear that the var variable may or may not have a member attribute. If method1 or method2 does not exist then the exception is not masked and is passed on. Now let’s consider the case where we want to allow the method1 attribute to be optional.

try:
    var.method1()
except AttributeError:
    pass  # handle error

At first glance it’s obvious that method1 is optional, but actually we’re catching too much here. If there is a bug in method1 that causes an AttributeError to be raised then this will be masked and the code will treat it as if method1 didn’t exist. A better piece of code would be:

try:
    method = var.method1
except AttributeError:
    pass  # handle error
else:
    method()

ImportErrors are similar because importing a module executes code, and when an error occurs you can’t tell whether the original import failed or whether an import inside the imported module failed. Unlike with an AttributeError there is no easy way to rewrite the code to only catch the error you’re interested in. Python does provide some tools to divide the import process into steps, so you can tell whether the module exists before attempting to import it. In particular the imp.find_module function would be useful.
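As a rough illustration of that approach, here is a minimal sketch (the helper name and structure are my own, not Django’s) that walks a dotted module name with imp.find_module to check whether it exists without executing it:

import imp

def module_exists(dotted_name):
    # Walk each component of the dotted name, so "myapp.models" checks
    # "myapp" first and then looks for "models" inside the package.
    path = None
    for part in dotted_name.split("."):
        try:
            file_, pathname, description = imp.find_module(part, path)
        except ImportError:
            return False
        if file_ is not None:
            file_.close()
        path = [pathname]
    return True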

Changing Django to avoid catching the wrong ImportErrors will greatly complicate the code. It would also introduce the danger that the algorithm used would not match the one used by Python. So, what’s the moral of this story? Never catch more exceptions than you intended to, and if you get some really odd errors in your Django site watch out for ImportErrors.


Photo of Hidden Cat by Craig Grahford.


Written by Andrew Wilkinson

March 7, 2012 at 1:59 pm

Beating Google With CouchDB, Celery and Whoosh (Part 8)


In the previous seven posts I’ve gone through all the stages in building a search engine. If you want to try and run it for yourself and tweak it to make it even better then you can. I’ve put the code up on GitHub. All I ask is that if you beat Google, you give me a credit somewhere.

When you’ve downloaded the code it should prove to be quite simple to get running. First you’ll need to edit settings.py. It should work out of the box, but you should change the USER_AGENT setting to something unique. You may also want to adjust some of the other settings, such as the database connection or CouchDB urls.
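As a rough guide, the relevant part of settings.py might look something like the snippet below. USER_AGENT comes from the post, but the CouchDB setting names and values here are assumptions rather than what is actually in the repository:

import couchdb

# Change this to something unique that identifies your crawler.
USER_AGENT = "MyExperimentalCrawler/0.1 (+http://example.com/bot)"

# Assumed setting names: point these at your own CouchDB server and database.
COUCHDB_SERVER = "http://localhost:5984/"
COUCHDB_DATABASE = "crawler"

db = couchdb.Server(COUCHDB_SERVER)[COUCHDB_DATABASE]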

To set up the CouchDB views type python manage.py update_couchdb.

Next, to run the celery daemon you’ll need to type the following two commands:

python manage.py celeryd -Q retrieve
python manage.py celeryd -Q process

This sets up the daemons to monitor the two queues and process the tasks. As mentioned in a previous post two queues are needed to prevent one set of tasks from swamping the other.

Next you’ll need to run the full text indexer, which can be done with python manage.py index_update and then you’ll want to run the server using python manage.py runserver.

At this point you should have several processes running, not doing anything. To kick things off we need to inject one or more urls into the system. You can do this with another management command, python manage.py start_crawl http://url. You can run this command as many times as you like to seed your crawler with different pages. It has been my experience that the average page has around 100 links on it, so it shouldn’t take long before your crawler is scampering off to crawl many more pages than you initially seeded it with.

So, how well does Celery work with CouchDB as a backend? The answer is that it’s a bit mixed. Certainly it makes it very easy to get started as you can just point it at the server and it just works. However, the drawback, and it’s a real show stopper, is that the Celery daemon will poll the database looking for new tasks. This polling, as you scale up the number of daemons, will quickly bring your server to its knees and prevent it from doing any useful work.

The disappointing fact is that Celery could watch the _changes feed rather than polling. Hopefully this will get fixed in a future version. For now though, for anything other than experimental-scale installations, RabbitMQ is a much better bet.

Hopefully this series has been useful to you, and please do download the code and experiment with it!


Photo of github 章魚貼紙 by othree.

Written by Andrew Wilkinson

October 21, 2011 at 12:00 pm

Beating Google With CouchDB, Celery and Whoosh (Part 7)


The key ingredients of our search engine are now in place, but we face a problem. We can download webpages and store them in CouchDB. We can rank them in order of importance and query them using Whoosh, but the internet is big, really big! A single server doesn’t even come close to being able to hold all the information that you would want it to – Google has an estimated 900,000 servers. So how do we scale the software we’ve written so far effectively?

The reason I started writing this series was to investigate how well Celery’s integration with CouchDB works. This gives us an immediate win in terms of scaling as we don’t need to worry about a different backend, such as RabbitMQ. Celery itself is designed to scale, so we can run celeryd daemons on as many boxes as we like and the jobs will be divided amongst them. This means that our indexing and ranking processes will scale easily.

CouchDB is not designed to scale across multiple machines, but there is some mature software, CouchDB-Lounge, that does just that. I won’t go into how to set this up, but fundamentally you put a proxy in front of your CouchDB cluster that shards the data across the nodes. It deals with the job of merging view results and managing where the data is actually stored so you don’t have to. O’Reilly’s CouchDB: The Definitive Guide has a chapter on clustering that is well worth a read.

Unfortunately, while Whoosh is easy to work with it’s not designed to be used on a large scale. Indeed, if someone was crazy enough to try to run the software we’ve developed in this series they might be advised to replace Whoosh with Solr. Solr is a Lucene-based search server which provides an HTTP interface to the full-text index. Solr comes with a sharding system to enable you to query an index that is too large for a single machine.

So, with our two data storage tools providing HTTP interfaces, and both having replication and sharding either built in or available as a proxy, the chances of being able to scale effectively are good. Celery should allow the background tasks that are needed to run a search engine to be scaled, but the challenges of building and running a large scale infrastructure are many and I would not claim that these tools mean success is guaranteed!

In the final post of this series I will discuss what I’ve learnt about running Celery with CouchDB, and with CouchDB in general. I’ll also describe how to download and run the complete code so you can try these techniques for yourself.

Read part 8.


Photo of The Planet Data Center by The Planet.

Written by Andrew Wilkinson

October 19, 2011 at 12:00 pm

Beating Google With CouchDB, Celery and Whoosh (Part 6)


We’re nearing the end of our plot to create a Google-beating search engine (in my dreams at least) and in this post we’ll build the interface to query the index we’ve built up. Like Google the interface is very simple, just a text box on one page and a list of results on another.

To begin with we just need a page with a query box. To make the page slightly more interesting we’ll also include the number of pages in the index, and a list of the top documents as ordered by our ranking algorithm.

The templates on this page extend base.html, which provides the boilerplate code needed to make an HTML page.

{% extends "base.html" %}

{% block body %}
    <form action="/search" method="get">
        <input name="q" type="text">
        <input type="submit">
    </form>

    <hr>

    <p>{{ doc_count }} pages in index.</p>

    <hr>

    <h2>Top Pages</h2>

    <ol>
    {% for page in top_docs %}
        <li><a href="{{ page.url }}">{{ page.url }}</a> - {{ page.rank }}</li>
    {% endfor %}
    </ol>
{% endblock %}

To show the number of pages in the index we need to count them. We’ve already created a view to list Pages by their url, and CouchDB can return the number of documents in a view without actually returning any of them, so we can just get the count from that. We’ll add the following function to the Page model class.

    @staticmethod
    def count():
        r = settings.db.view("page/by_url", limit=0)
        return r.total_rows

We also need to be able to get a list of the top pages, by rank. We just need to create a view that has the rank as the key and CouchDB will sort it for us automatically.
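Neither that view nor the get_top_by_rank method used below is shown in the post; a sketch of the Python side, assuming a page/by_rank view that emits the rank as the key and a dictionary of the url and rank as the value, might look like this:

    @staticmethod
    def get_top_by_rank(limit=20):
        # CouchDB sorts by key, so with the rank as the key descending=True
        # returns the highest-ranked pages first.
        r = settings.db.view("page/by_rank", descending=True, limit=limit)
        return [row.value for row in r]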

With all the background pieces in place the Django view function to render the index is really very straightforward.

def index(req):
    return render_to_response("index.html", { "doc_count": Page.count(), "top_docs": Page.get_top_by_rank(limit=20) })

Now we get to the meat of the experiment, the search results page. First we need to query the index.

def search(req):
    q = QueryParser("content", schema=schema).parse(req.GET["q"])

This parses the user-submitted query and prepares it to be used by Whoosh. Next we need to pass the parsed query to the index.

    results = get_searcher().search(q, limit=100)

Hurrah! Now we have a list of results that match our search query. All that remains is to decide what order to display them in. To do this we normalize the score returned by Whoosh and the rank that we calculated, and add them together.

    if len(results) > 0:
        max_score = max([r.score for r in results])
        max_rank = max([r.fields()["rank"] for r in results])

To calculate our combined rank we normalize the score and the rank by setting the largest value of each to one and scaling the rest appropriately.

        combined = []
        for r in results:
            fields = r.fields()
            r.score = r.score/max_score
            r.rank = fields["rank"]/max_rank
            r.combined = r.score + r.rank
            combined.append(r)

The final stage is to sort our list by the combined score and render the results page.

        combined.sort(key=lambda x: x.combined, reverse=True)
    else:
        combined = []

    return render_to_response("results.html", { "q": req.GET["q"], "results": combined })

The template for the results page is below.

{% extends "base.html" %}

{% block body %}
    <form action="/search" method="get">
        <input name="q" type="text" value="{{ q }}">
        <input type="submit">
    </form>

    {% for result in results|slice:":20" %}
        <p>
            <b><a href="{{ result.url }}">{{ result.title|safe }}</a></b> ({{ result.score }}, {{ result.rank }}, {{ result.combined }})<br>
            {{ result.desc|safe }}
        </p>
    {% endfor %}
{% endblock %}

So, there we have it. A complete web crawler, indexer and query website. In the next post I’ll discuss how to scale the search engine.

Read part 7.


Photo of Query by amortize.

Written by Andrew Wilkinson

October 13, 2011 at 12:00 pm

Beating Google With CouchDB, Celery and Whoosh (Part 5)


In this post we’ll continue building the backend for our search engine by implementing the algorithm we designed in the last post for ranking pages. We’ll also build an index of our pages with Whoosh, a pure-Python full-text indexer and query engine.

To calculate the rank of a page we need to know what other pages link to a given url, and how many links that page has. The code below is a CouchDB map called page/links_to_url. For each page this will output a row for each link on the page with the url linked to as the key and the page’s rank and number of links as the value.

function (doc) {
    if(doc.type == "page") {
        for(i = 0; i < doc.links.length; i++) {
            emit(doc.links[i], [doc.rank, doc.links.length]);
        }
    }
}

As before we’re using a Celery task to allow us to distribute our calculations. When we wrote the find_links task we called calculate_rank with the document id for our page as the parameter.

@task
def calculate_rank(doc_id):
    page = Page.load(settings.db, doc_id)

Next we get a list of ranks for the pages that link to this page. This static method is a thin wrapper around the page/links_to_url map function given above.

    links = Page.get_links_to_url(page.url)
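The get_links_to_url helper itself isn’t shown in the post; a thin wrapper over the page/links_to_url view might look something like this sketch:

    @staticmethod
    def get_links_to_url(url):
        # Each row's value is the [rank, number_of_links] pair emitted by
        # the page/links_to_url map for a link pointing at this url.
        return [row.value for row in
                settings.db.view("page/links_to_url", key=url)]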

Now we have the list of ranks we can calculate the rank of this page by dividing the rank of each linking page by its number of links and summing this across all the linking pages.

    rank = 0
    for link in links:
        rank += link[0] / link[1]

To prevent cycles (where A links to B and B links to A) from causing an infinite loop in our calculation we apply a damping factor. This scales the value of each link by 0.85 and, combined with the threshold later in the function, forces any loops to settle on a value.

    old_rank = page.rank
    page.rank = rank * 0.85

If we didn’t find any links to this page then we give it a default rank of 1/number_of_pages.

    if page.rank == 0:
        page.rank = 1.0/settings.db.view("page/by_url", limit=0).total_rows

Finally we compare the new rank to the previous rank in our system. If it has changed by more than 0.0001 then we save the new rank and cause all the pages linked to from our page to recalculate their rank.

    if abs(old_rank - page.rank) > 0.0001:
        page.store(settings.db)

        for link in page.links:
            p = Page.get_id_by_url(link, update=False)
            if p is not None:
                calculate_rank.delay(p)
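The get_id_by_url helper isn’t shown in this series either; a sketch of it, reusing the page/by_url view and returning the document id rather than a full Page object, might be:

    @staticmethod
    def get_id_by_url(url, update=True):
        # The page/by_url view maps a url to the document id, so we can
        # return the id without loading the whole document.
        r = settings.db.view("page/by_url", key=url)
        if len(r.rows) == 1:
            return r.rows[0].value
        if not update:
            return None
        return Page.get_by_url(url).id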

This is a very simplistic implementation of a page rank algorithm. It does generate a useful ranking of pages, but the number of queued calculate_rank tasks explodes. In a later post I’ll discuss how this could be rewritten to be more efficient.

Whoosh is a pure-Python full text search engine. In the next post we’ll look at querying it, but first we need to index the pages we’ve crawled.

The first step with Whoosh is to specify your schema. To speed up the display of results we store the information we need to render the results page directly in the index. For this we need the page title, url and description. We also store the score given to the page by our pagerank-like algorithm. Finally we add the page text to the index so we can query it. If you want more details, the Whoosh documentation is pretty good.

from whoosh.fields import *

schema = Schema(
    title=TEXT(stored=True),
    url=ID(stored=True, unique=True),
    desc=ID(stored=True),
    rank=NUMERIC(stored=True, type=float),
    content=TEXT,
)

CouchDB provides an interface for being informed whenever a document in the database changes. This is perfect for building an index.

Our full-text indexing daemon is implemented as a Django management command so there is some boilerplate code required to make this work.

class Command(BaseCommand):
    def handle(self, **options):
        since = get_last_change()
        writer = get_writer()

CouchDB allows you to get all the changes that have occurred since a specific point in time (using a sequence number). We store this number inside the Whoosh index directory, and access it using the get_last_change and set_last_change functions. Our access to the Whoosh index is through an IndexWriter object, again accessed through an abstraction function.
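The helper functions themselves aren’t shown in the post; a minimal sketch, assuming a WHOOSH_INDEX_DIR setting that points at the index directory and the schema defined above, might be:

import os

from django.conf import settings
from whoosh import index

LAST_CHANGE_FILE = os.path.join(settings.WHOOSH_INDEX_DIR, "last_change")

def get_writer():
    # Open the existing index, or create it on the first run.
    if index.exists_in(settings.WHOOSH_INDEX_DIR):
        ix = index.open_dir(settings.WHOOSH_INDEX_DIR)
    else:
        ix = index.create_in(settings.WHOOSH_INDEX_DIR, schema)
    return ix.writer()

def get_last_change():
    # The last processed sequence number lives alongside the index files.
    try:
        return int(open(LAST_CHANGE_FILE).read().strip())
    except (IOError, ValueError):
        return 0

def set_last_change(seq):
    open(LAST_CHANGE_FILE, "w").write(str(seq))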

Now we enter an infinite loop and call the changes function on our CouchDB database object to get the changes.

        try:
            while True:
                changes = settings.db.changes(since=since)
                since = changes["last_seq"]
                for changeset in changes["results"]:
                    try:
                        doc = settings.db[changeset["id"]]
                    except couchdb.http.ResourceNotFound:
                        continue

In our database we store robots.txt files as well as pages, so we need to ignore them. We also need to parse the document so we can pull out the text from the page. We do this with the BeautifulSoup library.

                    if "type" in doc and doc["type"] == "page":
                        soup = BeautifulSoup(doc["content"])
                        if soup.body is None:
                            continue

On the results page we try to use the meta description if we can find it.

                        desc = soup.findAll('meta', attrs={ "name": desc_re })
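The desc_re pattern isn’t defined in this excerpt; a case-insensitive match on the name attribute, something like the following, is assumed:

import re

# Matches <meta name="description" ...> regardless of case.
desc_re = re.compile(r"^description$", re.I)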

Once we’ve got the parsed document we update our Whoosh index. The code is a little complicated because we need to handle the case where the page doesn’t have a title or description, and because we index the title as well as the body text of the page. The key element here is text=True, which pulls out just the text from a node and strips out all of the tags.

                        writer.update_document(
                                title=unicode(soup.title(text=True)[0]) if soup.title is not None and len(soup.title(text=True)) > 0 else doc["url"],
                                url=unicode(doc["url"]),
                                desc=unicode(desc[0]["content"]) if len(desc) > 0 and desc[0]["content"] is not None else u"",
                                rank=doc["rank"],
                                content=unicode(soup.title(text=True)[0] + "\n" + doc["url"] + "\n" + "".join(soup.body(text=True)))
                            )

Finally we update the index and save the last change number so next time the script is run we continue from where we left off.

                    writer.commit()
                    writer = get_writer()

                set_last_change(since)
        finally:
            set_last_change(since)

In the next post I’ll discuss how to query the index, sort the documents by our two rankings and build a simple web interface.

Read part 6.


Photo of order by Steve Mishos.

Written by Andrew Wilkinson

October 11, 2011 at 12:00 pm

Beating Google With CouchDB, Celery and Whoosh (Part 4)


In this series I’m showing you how to build a webcrawler and search engine using standard Python based tools like Django, Celery and Whoosh with a CouchDB backend. In previous posts we created a data structure, parsed and stored robots.txt and stored a single webpage in our database. In this post I’ll show you how to parse out the links from our stored HTML document so we can complete the crawler, and we’ll start calculating the rank for the pages in our database.

There are several different ways of parsing out the links in a given HTML document. You can just use a regular expression to pull the urls out, or you can use a more complete but also more complicated (and slower) method of parsing the HTML using the standard Python HTMLParser library, or the wonderful Beautiful Soup. The point of this series isn’t to build a complete webcrawler, but to show you the basic building blocks. So, for simplicity’s sake I’ll use a regular expression.

link_single_re = re.compile(r"<a[^>]+href='([^']+)'")
link_double_re = re.compile(r'<a[^>]+href="([^"]+)"')

All we need to do is look for an href attribute in an a tag. We’ll use two regular expressions to handle single and double quotes, and then build a list containing all the links in the document.

@task
def find_links(doc_id):
    doc = Page.load(settings.db, doc_id)

    raw_links = []
    for match in link_single_re.finditer(doc.content):
        raw_links.append(match.group(1))

    for match in link_double_re.finditer(doc.content):
        raw_links.append(match.group(1))

Once we’ve got a list of the raw links we need to process them into absolute urls that we can send back to the retrieve_page task we wrote earlier. I’m cutting some corners with processing these urls, in particular I’m not dealing with base tags.

    doc.links = []
    for link in raw_links:
        if link.startswith("#"):
            continue
        elif link.startswith("http://") or link.startswith("https://"):
            pass
        elif link.startswith("/"):
            parse = urlparse(doc["url"])
            link = parse.scheme + "://" + parse.netloc + link
        else:
            link = "/".join(doc["url"].split("/")[:-1]) + "/" + link

        doc.links.append(unescape(link.split("#")[0]))

    doc.store(settings.db)

Once we’ve got our list of links and saved the modified document we then need to trigger the next series of steps to occur. We need to calculate the rank of this page, so we trigger that task and then we step through each page that we linked to. If we’ve already got a copy of the page then we want to recalculate its rank to take into account the rank of this page (more on this later) and if we don’t have a copy then we queue it up to be retrieved.

    calculate_rank.delay(doc.id)

    for link in doc.links:
        p = Page.get_id_by_url(link, update=False)
        if p is not None:
            calculate_rank.delay(p)
        else:
            retrieve_page.delay(link)

We’ve now got a complete webcrawler. We can store webpages and robots.txt files. Given a starting URL our crawler will set about parsing pages to find out what they link to and retrieve those pages as well. Given enough time you’ll end up with most of the internet on your harddisk!

When we come to write the website to query the information we’ve collected we’ll use two numbers to rank pages. First we’ll use a value that ranks pages based on the query used, but we’ll also use a value that ranks pages based on their importance. This is the same method used by Google, known as Page Rank.

Page Rank is a measure of how likely you are to end up on a given page by clicking on a random link anywhere on the internet. The Wikipedia article goes into some detail on a number of ways to calculate it, but we’ll use a very simple iterative algorithm.

When created, a page is given a rank equal to 1/number of pages. Each link that is found on a newly crawled page then causes the rank of the destination page to be calculated. In this case the rank of a page is the sum of the ranks of the pages that link to it, divided by the number of links on those pages, multiplied by a dampening factor (I use 0.85, but this could be adjusted). If a page has a rank of 0.25 and has five links then each page linked to gains 0.05*0.85 rank for that link. If the change in rank of the page when recalculated is significant then the ranks of all the pages it links to are recalculated.

In this post we’ve completed the web crawler part of our search engine and discussed how to rank pages in importance. In the next post we’ll implement this ranking and also create a full text index of the pages we have crawled.

Read part 5.


Photo of Red Sofa encounter i by Ricard Gil.

Written by Andrew Wilkinson

October 6, 2011 at 12:00 pm

Beating Google With CouchDB, Celery and Whoosh (Part 3)


In this series I’ll show you how to build a search engine using standard Python tools like Django, Whoosh and CouchDB. In this post we’ll start crawling the web and filling our database with the contents of pages.

One of the rules we set down was to not request a page too often. If, by accident, we try to retrieve a page more than once a week then we don’t want that request to actually make it to the internet. To help prevent this we’ll extend the Page class we created in the last post with a function called get_by_url. This static method will take a url and return the Page object that represents it, retrieving the page if we don’t already have a copy. You could create this as an independent function, but I prefer to use static methods to keep things tidy.

We only actually want to retrieve the page from the internet in one of the three tasks that we’re going to create, so we’ll give get_by_url a parameter, update, that enables us to return None if we don’t have a copy of the page.

    @staticmethod
    def get_by_url(url, update=True):
        r = settings.db.view("page/by_url", key=url)
        if len(r.rows) == 1:
            doc = Page.load(settings.db, r.rows[0].value)
            if doc.is_valid():
                return doc
        elif not update:
            return None
        else:
            doc = Page(url=url)

        doc.update()

        return doc

The key line in the static method is doc.update(). This calls the function that retrieves the page and makes sure we respect the robots.txt file. Let’s look at what happens in that function.

    def update(self):
        parse = urlparse(self.url)

We need to split up the given URL so we know whether it’s a secure connection or not, and we need to limit our connections to each domain so we need to get that as well. Python has a module, urlparse, that does the hard work for us.
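For example, the scheme and the domain come straight out of the parsed result (the URL here is purely illustrative):

from urlparse import urlparse

parse = urlparse("https://example.com/path/page.html")
print parse.scheme   # "https"
print parse.netloc   # "example.com"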

        robotstxt = RobotsTxt.get_by_domain(parse.scheme, parse.netloc)
        if not robotstxt.is_allowed(parse.netloc):
            return False

In the previous post we discussed parsing the robots.txt file, and here we make sure that if we’re not allowed to index a page then we don’t.

        while cache.get(parse.netloc) is not None:
            time.sleep(1)
        cache.set(parse.netloc, True, 10)

As with the code to parse robots.txt files we need to make sure we don’t access the same domain too often.

        req = Request(self.url, None, { "User-Agent": settings.USER_AGENT })

        resp = urlopen(req)
        if not resp.info()["Content-Type"].startswith("text/html"):
            return
        self.content = resp.read().decode("utf8")
        self.last_checked = datetime.now()

        self.store(settings.db)

Finally, once we’ve checked we’re allowed to access a page and haven’t accessed another page on the same domain recently we use the standard Python tools to download the content of the page and store it in our database.

Now we can retrieve a page we need to add it to the task processing system. To do this we’ll create a Celery task to retrieve the page. The task just needs to call the get_by_url static method we created earlier and then, if the page is downloaded, trigger a second task to parse out all of the links.

@task
def retrieve_page(url):
    page = Page.get_by_url(url)
    if page is None:
        return

    find_links.delay(page.id)

You might be asking why the links aren’t parsed immediately after retrieving the page. They certainly could be, but a key goal was to enable the crawling process to scale as much as possible. Each page crawled has, based on the pages I’ve crawled so far, around 100 links on it. As part of the find_links task a new retrieve_page task is created for each of them. This quickly swamps the other tasks, like calculating the rank of a page, and prevents them from being processed.

Celery provides a tool to ensure that a subset of messages is processed in a timely manner: queues. Tasks can be assigned to different queues and daemons can be made to watch a specific set of queues. If you have a Celery daemon that only watches the queue used by your high priority tasks then those tasks will always be processed quickly.

We’ll use two queues, one for retrieving the pages and another for processing them. First we need to tell Celery about the queues (we also need to include the default celery queue here) and then we create a router class. The router looks at the task name and decides which queue to put it into. Your routing code could be very complicated, but ours is very straightforward.

CELERY_QUEUES = {"retrieve": {"exchange": "default", "exchange_type": "direct", "routing_key": "retrieve"},
                 "process": {"exchange": "default", "exchange_type": "direct", "routing_key": "process "},
                 "celery": {"exchange": "default", "exchange_type": "direct", "routing_key": "celery"}}

class MyRouter(object):
    def route_for_task(self, task, args=None, kwargs=None):
        if task == "crawler.tasks.retrieve_page":
            return { "queue": "retrieve" }
        else:
            return { "queue": "process" }

CELERY_ROUTES = (MyRouter(), )

The final step is to allow the crawler to be kicked off by seeding it with some URLs. I’ve previously posted about how to create a Django management command and they’re a perfect fit here. The command takes one argument, the url, and creates a Celery task to retrieve it.

class Command(BaseCommand):
    def handle(self, url, **options):
        retrieve_page.delay(url)

We’ve now got a web crawler that is almost complete. In the next post I’ll discuss parsing links out of the HTML, and we’ll look at calculating the rank of each page.

Read part 4.


Photo of Celery by tim ellis.

Written by Andrew Wilkinson

October 4, 2011 at 12:00 pm