Ontario Library Association Archives

Teaching Librarian (Toronto, ON: Ontario Library Association, 20030501), Spring 2005, p. 31


Acceptability filtering uses algorithms, pattern analysis, and sensitive measurements to examine and test content to determine whether it passes or fails a set of criteria. These tests vary, ranging from counting the occurrence of certain trigger words to doing pixel checks for skin tones in images. This method is prone to error, as it is usually a computer-driven process with little human checking and judgment of the results. The actual tests done by any one of the acceptability filters are trade secrets, as underground web content creators are constantly trying to find ways past them. As testing methods get broken, new ones get developed, so you need to update the software periodically. Additionally, acceptability filtering places a bigger workload on the end computer, consuming memory and processing power. The plus side is that at least you get to surf all of the net (unlike inclusion whitelists), and even new, unrated sites get checked before they get through (unlike exclusion blacklists).

Some of the simpler home versions of these filters may use only one filtering method, but the large commercial services subscribed to at the network server level use a combination of all of them… some sites are always allowed, others can be blacklisted, and the balance get tested for acceptability. Such "server-side solutions" are both powerful and flexible. The IT department can manage and deploy the filtering software across an entire network while still allowing various user logons (teachers vs. students, or elementary vs. secondary, for example) to have distinct levels of access to the Internet. Any update reaches all computers instantly.

But running a filtering application on the network server itself can create quite a bottleneck in the traffic load. All Internet results for every user, anywhere on the network, must be passed through the filter application before being passed on. During heavy-use times of the day, Internet response will seem to slow to a crawl… it's not the Internet that's slow… it's the filter application forming a bottleneck where all returning responses are queued, waiting to be allowed in. A "cache-on-demand" service can significantly speed the time it takes to retrieve documents from the Internet, but it requires large, fast memory buffers: frequently accessed pages are cached locally the first time they are requested, and subsequent requests for those pages are filled from the memory cache, bypassing the filter. Most robust filters require their own server so that regular internal network traffic in your system is not affected.

Filtering at the district board level is costly. Either the board needs to house and maintain another network server or license a proxy server solution that resides at a remote location. Either way, the filtering companies often charge by student population… and with annual fees running from $2 to $20 per student, you can quickly see how expensive it gets (a board of 50,000 students, for instance, would be paying anywhere from $100,000 to $1,000,000 a year).

"ALA's view is that protecting children online is complex, and the solutions demanded are also complex as well as varied. … Filters are not the only solution, nor even the best solution. If you educate children, you are developing an internal filter that is going to remain with them throughout their life."

Judith F. Krug, Director of the American Library Association's Office for Intellectual Freedom
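For readers curious how the pieces fit together, here is a minimal sketch (in Python) of the combined approach described above: allowlist first, then blocklist, then an acceptability test based on trigger-word counting. The site lists, trigger words, and pass/fail threshold are invented for illustration only, since the real tests used by commercial filters are trade secrets.

```python
# Illustrative sketch only -- not any vendor's actual product.
# Order of checks: allowlist, then blocklist, then an acceptability test.

ALLOWED_SITES = {"ola.org", "accessola.com"}    # hypothetical "always allowed" list
BLOCKED_SITES = {"blocked-example.com"}         # hypothetical blacklist
TRIGGER_WORDS = {"casino", "xxx"}               # hypothetical trigger words
MAX_TRIGGERS = 3                                # hypothetical pass/fail threshold


def is_acceptable(host: str, page_text: str) -> bool:
    """Return True if the page should be passed on to the user."""
    if host in ALLOWED_SITES:       # allowlisted sites skip all testing
        return True
    if host in BLOCKED_SITES:       # blacklisted sites are refused outright
        return False
    # Acceptability test: count occurrences of trigger words and
    # fail the page once the count reaches the threshold.
    words = page_text.lower().split()
    hits = sum(words.count(word) for word in TRIGGER_WORDS)
    return hits < MAX_TRIGGERS


if __name__ == "__main__":
    print(is_acceptable("ola.org", "library programs for schools"))    # True
    print(is_acceptable("unknown-site.com", "xxx casino xxx offers"))  # False
```

A commercial server-side product would layer far more sophisticated pattern analysis and image checks on top of a simple word count, and would run every returning page for every user on the network through such a check, which is exactly why the filter can become a bottleneck during heavy use.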
