
Web Crawler - Project


Description

This assignment is intended to familiarize you with the HTTP protocol. HTTP is (arguably) the most important application-level protocol on the Internet today: the Web runs on HTTP, and increasingly other applications use HTTP as well (including BitTorrent, streaming video, Facebook and Twitter’s social APIs, etc.).


Your goal in this assignment is to implement a web crawler that gathers data from a fake social networking website that we have set up for you. There are several educational goals of this project:

- To expose you to the HTTP protocol, which underlies a large (and growing) number of applications and services today.
- To let you see how web pages are structured using HTML.
- To give you experience implementing a client for a well-specified network protocol.
- To have you understand how web crawlers work, and how they are used to implement popular web services today.


What is a Web Crawler?

A web crawler (sometimes known as a robot, a spider, or a scraper) is a piece of software that automatically gathers and traverses documents on the web. For example, let's say you have a crawler and you tell it to start at https://www.wikipedia.com. The software will first download the Wikipedia homepage, then it will parse the HTML and locate all hyperlinks (i.e., anchor tags) embedded in the page. The crawler then downloads all the HTML pages specified by the URLs on the homepage, and parses them looking for more hyperlinks. This process continues until all of the pages on Wikipedia are downloaded and parsed.
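To make the parse step concrete, here is a minimal sketch (in Python, using only the html.parser module, which is on the allowed list below) of pulling the href out of every anchor tag on a page you have already downloaded. The class name is just an illustration, not something the assignment requires.

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collects the href attribute of every <a> tag it sees."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    # Usage: feed the extractor the HTML body of a downloaded page.
    # parser = LinkExtractor()
    # parser.feed(html_body)
    # print(parser.links)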


Web crawlers are a fundamental component of today’s web. For example, Googlebot is Google’s web crawler. Googlebot is constantly scouring the web, downloading pages in search of new and updated content. All of this data forms the backbone of Google’s search engine infrastructure.


High-level Requirements
Your goal is to collect 5 secret flags that have been hidden somewhere on the Fakebook website. The flags are unique for each student, and the pages that contain the flags will be different for each student. Since you have no idea what pages the secret flags will appear on, and the Fakebook site is very large (tens of thousands of pages), your only option is to write a web crawler that will traverse Fakebook and locate your flags.

Your crawler takes four command-line arguments: -s, -p, username, and password. The -s and -p arguments are optional; they specify the server and port your code should crawl, respectively. If either or both are not provided, you should use proj5.3700.network for the server and 443 for the port. The username and password arguments are used by your crawler to log in to Fakebook. You may assume that the root page for Fakebook is available at https://<server>:<port>/fakebook/. You may also assume that the login form for Fakebook is available at https://<server>:<port>/accounts/login/?next=/fakebook/.
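The exact command-line invocation is not reproduced here, but assuming a form like ./crawler [-s server] [-p port] username password, a minimal argument-parsing sketch using Python’s argparse might look like this (the defaults come straight from the paragraph above):

    import argparse

    def parse_args():
        # Defaults taken from the spec: proj5.3700.network and port 443.
        parser = argparse.ArgumentParser(description="Fakebook crawler")
        parser.add_argument("-s", dest="server", default="proj5.3700.network",
                            help="server to crawl")
        parser.add_argument("-p", dest="port", type=int, default=443,
                            help="port to connect to")
        parser.add_argument("username", help="Fakebook username")
        parser.add_argument("password", help="Fakebook password")
        return parser.parse_args()

    # args = parse_args()
    # root_url = f"https://{args.server}:{args.port}/fakebook/"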


Your web crawler should print exactly five lines of output to STDOUT: the five secret flags discovered during the crawl of Fakebook, each terminated by a \n character. Your web crawler should not print anything other than those five flags. If your program encounters an unrecoverable error, it may print an error message before terminating.


Secret flags may be hidden on any page on Fakebook, their exact location on each page may differ, and a page may contain multiple flags. Each secret flag is a 64-character-long sequence of random alphanumeric characters.
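The markup surrounding a flag is not spelled out here, so one hedged approach is simply to scan every page body for 64-character alphanumeric runs; the sketch below does exactly that and may need tightening once you see how flags actually appear on a page.

    import re

    # A flag is a 64-character sequence of random alphanumerics; the word
    # boundaries stop us from matching a slice of some longer token.
    FLAG_RE = re.compile(r"\b[a-zA-Z0-9]{64}\b")

    def find_flags(page_body):
        """Return every 64-character alphanumeric run found in the page."""
        return FLAG_RE.findall(page_body)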

Sockets, Ports, and TLS

Fakebook uses HTTPS, which means that the full protocol stack is HTTP over TLS over TCP. Thus, in this project, your web crawler will need to connect to Fakebook using a TCP socket wrapped in TLS. Note that in HTTPS, the TCP socket gets wrapped in TLS immediately after connection, before any HTTP protocol messages are sent. This is similar to the TLS version of the simple client you wrote in Project 1, where you needed to implement both a non-secure and secure version of your client.
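A minimal sketch of opening that TLS-wrapped TCP connection, using only the socket and ssl modules (both allowed); the host and port defaults are the ones given earlier in this spec:

    import socket
    import ssl

    def connect(host="proj5.3700.network", port=443):
        """Open a TCP connection and wrap it in TLS before any HTTP is sent."""
        context = ssl.create_default_context()
        raw_sock = socket.create_connection((host, port))
        # server_hostname enables SNI and certificate hostname checking.
        return context.wrap_socket(raw_sock, server_hostname=host)

    # sock = connect()
    # ...send HTTP requests over sock, then sock.close()...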


By convention, HTTP uses TCP port 80 and HTTPS uses port 443. Thus, in this project, you will be connecting to Fakebook on port 443 unless the -p option is specified.

HTTP and Legal Libraries
Part of the challenge of this assignment is that all HTTP request and response code must be written by the student, from scratch. In other words, you need to implement the ability to send HTTP/1.1 messages and parse HTTP responses. Students may use any available libraries to create socket connections, implement TLS, parse URLs, and parse HTML. However, you may not use any libraries/modules/etc. that implement HTTP or manage cookies for you. You may also not use any all-in-one scrapers, such as BeautifulSoup.


For example, if you were to write your crawler in Python, the following modules would all be allowed: socket, urllib.parse, html, html.parser, and xml. However, the following modules would not be allowed: urllib, urllib2, httplib, requests, pycurl, and cookielib.
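For instance, the allowed urllib.parse module is useful for turning the relative hrefs you find in anchor tags into absolute URLs and for checking that a link stays on the Fakebook server; a small sketch:

    from urllib.parse import urljoin, urlparse

    base = "https://proj5.3700.network/fakebook/"

    # Resolve a relative link found on the current page.
    absolute = urljoin(base, "../accounts/logout/")

    # Inspect the pieces of the URL, e.g. to stay on the same server.
    parts = urlparse(absolute)
    print(parts.netloc, parts.path)   # proj5.3700.network /accounts/logout/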


Similarly, if you were to write your crawler in Java, it would not be legal to use java.net.CookieHandler, java.net.CookieManager, java.net.HttpCookie, java.net.HttpUrlConnection, java.net.URLConnection, URL.openConnection(), URL.openStream(), or URL.getContent().


If you have any questions about the legality of a particular library, please post them to Piazza. It is much safer to ask ahead of time than to turn in code that uses a questionable library and lose points on the assignment after the fact.


Implementation Details and Hints

In this assignment, your crawler must implement HTTP/1.1 (not 0.9 or 1.0). This means that there are certain HTTP headers, like Host, that you must include in your requests (i.e., they are required in all HTTP/1.1 requests). We encourage you to implement Connection: Keep-Alive (i.e., persistent connections) to improve your crawler’s performance (and lighten the load on our server), but this is not required, and it is tricky to get right. We also encourage students to implement Accept-Encoding: gzip (i.e., compressed HTTP responses), since this will also improve performance for everyone, but this is also not required. If you want to get crazy, you can definitely speed up your crawler by using multithreading or multiprocessing, but again this is not required functionality.
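As a starting point, here is a hedged sketch of building a raw HTTP/1.1 GET request by hand; the mandatory Host header is included, and Connection: close keeps things simple if you decide to skip keep-alive.

    def build_get(host, path, cookie=None):
        """Build a raw HTTP/1.1 GET request; Host is mandatory in HTTP/1.1."""
        lines = [
            f"GET {path} HTTP/1.1",
            f"Host: {host}",
            # Connection: close is the simple choice if you skip keep-alive.
            "Connection: close",
        ]
        if cookie:
            lines.append(f"Cookie: {cookie}")
        return ("\r\n".join(lines) + "\r\n\r\n").encode()

    # sock.sendall(build_get("proj5.3700.network", "/fakebook/"))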


One of the key differences between HTTP/1.0 and HTTP/1.1 is that the latter supports chunked encoding. HTTP/1.1 servers may break up large responses into chunks, and it is the client’s responsibility to reconstruct the data by combining the chunks. Our server may return chunked responses, which means your client must be able to reconstruct them. To aid in debugging, you might consider using HTTP/1.0 for your initial implementation; once you have a working 1.0 implementation, you can switch to 1.1 and add support for chunked responses.
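Once you have split the headers off a chunked response, reassembling the body is mostly bookkeeping: each chunk is preceded by its size in hexadecimal and followed by a CRLF, and a zero-length chunk marks the end. A hedged sketch:

    def dechunk(body):
        """Reassemble a chunked HTTP body (bytes) into the original payload."""
        payload = b""
        while body:
            size_line, _, body = body.partition(b"\r\n")
            # The size may be followed by chunk extensions after a semicolon.
            size = int(size_line.split(b";")[0], 16)
            if size == 0:
                break                    # a zero-length chunk ends the body
            payload += body[:size]
            body = body[size + 2:]       # skip the chunk data and its CRLF
        return payload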

In order to build a successful web crawler, you will need to handle several different aspects of the HTTP protocol:

- HTTP GET: These requests are necessary for downloading HTML pages.
- HTTP POST: You will need to implement HTTP POST so that your code can log in to Fakebook. As noted above, you will pass a username and password to your crawler on the command line. The crawler will then use these values as parameters in an HTTP POST in order to log in to Fakebook (see the sketch after this list).
- Cookie management: Fakebook uses cookies to track whether clients are logged in to the site. If your crawler successfully logs in to Fakebook using an HTTP POST, Fakebook will return a session cookie to your crawler. Your crawler should store this cookie and submit it along with each HTTP GET request as it crawls Fakebook. If your crawler fails to handle cookies properly, your software will not be able to successfully crawl Fakebook.
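The field names expected by the login form are not given in this spec, so the ones below ("username" and "password") are assumptions; inspect the form’s HTML to find the real names and any hidden fields (such as a CSRF token) that must be echoed back. With that caveat, a form-encoded login POST might be built like this:

    from urllib.parse import urlencode

    def build_login_post(host, path, username, password, cookie=None):
        """Build a form-encoded HTTP/1.1 POST for the login page.

        Field names are assumed; read the login form's HTML for the real
        ones and for any hidden fields that must be sent back.
        """
        body = urlencode({"username": username, "password": password})
        lines = [
            f"POST {path} HTTP/1.1",
            f"Host: {host}",
            "Content-Type: application/x-www-form-urlencoded",
            f"Content-Length: {len(body)}",
            "Connection: close",
        ]
        if cookie:
            lines.append(f"Cookie: {cookie}")
        return ("\r\n".join(lines) + "\r\n\r\n" + body).encode()

    # After sending this, look for Set-Cookie headers in the response and
    # re-send those cookies (e.g. "sessionid=...") on every later request.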


In addition to crawling Fakebook, your web crawler must be able to correctly handle HTTP status codes. Obviously, you need to handle 200 - OK, since that means everything is okay. Your code must also handle the following (a sketch of this dispatch logic follows the list):

- 302 - Found: This is an HTTP redirect. Your crawler should retry the request using the new URL given by the server in the Location header.
- 403 - Forbidden and 404 - Not Found: Our web server may return these codes in order to trip up your crawler. In this case, your crawler should abandon the URL that generated the error code.
- 503 - Service Unavailable: Our web server may randomly return this error code to your crawler. In this case, your crawler should retry the request for the URL until it succeeds.
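A hedged sketch of that dispatch logic; frontier is an illustrative queue of URLs still to be crawled, and headers is assumed to be a dict of lower-cased response header names.

    from collections import deque

    def handle_status(status, headers, url, frontier):
        """Decide what to do with a fetched URL based on its status code."""
        if status == 200:
            return True                            # OK: parse the body
        if status == 302:
            frontier.append(headers["location"])   # follow the redirect later
        elif status == 503:
            frontier.append(url)                   # retry until it succeeds
        # 403 and 404: abandon the URL, nothing to re-queue
        return False

    # frontier = deque(["https://proj5.3700.network/fakebook/"])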

