Category Archives: Internet

How should we tackle “extreme” comments posted online?

The European Court of Human Rights, Strasbourg (image c/o James Russell on Flickr).

A recent ruling by the European Court of Human Rights (ECHR) could have ramifications for all of those with websites enabling comments to be posted by readers. The Court ruled that an Estonian news site (Delfi) may be held responsible for anonymous comments that are allegedly defamatory. A representative of digital rights organisation Access argued that the judgement has:

“…dramatically shifted the internet away from the free expression and privacy protections that created the internet as we know it.”

A post by the Media Legal Defence Initiative listed the main reasons why the court came to this decision, which included:

  1. the “extreme” nature of the comments which the court considered to amount to hate speech
  2. the fact that they were published on a professionally-run and commercial news website
  3. the insufficient measures taken by Delfi to weed out the comments in question and the low likelihood of a prosecution of the users who posted the comments.

The full judgement can be read here.

Who is responsible for comments posted online?

The timing of this is particularly relevant for me given the recent coverage of a tragic local incident. Following an attempted suicide by a local woman that led to the death of a man attempting to rescue her, a local news website reported the incident in relative detail, including statements from witnesses (although withholding, at the time, the names of the individuals involved). Sadly, this led to a number of insensitive and inappropriate comments being posted about the woman who tried to take her own life. When I approached the publishers to request that comments be closed on such a story, I was told that I should report individual inappropriate comments rather than expect them to remove the comments thread altogether.

These two stories raise a number of interesting issues. Who is ultimately liable for content that is published online? Is it the responsibility of the host website to deal with “extreme comments”? Is it the responsibility of the individual who posts the comments? Should there even be any restrictions on what people post online? Should we just accept that everyone has a right to free expression online and that hurtful comments are just manifestations of free expression?

What is your view?

If you have a perspective on the judgement by the ECHR, on who should ultimately be responsible for comments posted online, or on whether any limits in this area are an unreasonable restriction of free expression, and you would like to write about these issues for Informed, we’d like to hear from you. Articles should be 800-1000 words (although this is flexible) and our normal moderation process applies. If you are interested in writing for Informed, please contact us via submissions[at]theinformed.org.uk.

If you require any support, The Samaritans are available 24hrs a day, 365 days a week to provide support.

Ian Clark
The Informed Team

Net neutrality – what is it and why should we be concerned about it?

(Image c/o Maik on Flickr.)

What is net neutrality?

Net neutrality is the principle that all packets of data transmitted over the internet should be treated equally, without discrimination. So, for example, net neutrality ensures that your blog can be accessed just as quickly as, say, the BBC website. Essentially, it prevents ISPs from discriminating between sites and organisations, whereby those with the deepest pockets pay to get into the fast lane whilst the rest have to contend with the slow lane. Instead, every website is treated equally, preventing the big names from delivering their data faster than a small independent online service, and ensuring a fair and open playing field that encourages innovation and diversity in the range of information available online. The principles of net neutrality are, in effect, the reason why we have a (reasonably) diverse online space that enables anyone to create a website and reach a large audience.

Isn’t this mainly a US issue?

The issue has been a major topic for debate in the United States for some time now. In theory, it was resolved when the Federal Communications Commission (FCC) recently voted to protect the principle of net neutrality. However, this has not closed the debate, as some US broadband providers have launched a legal challenge against the ruling and Republicans in Congress have launched an attempt to fast-track a repeal of the FCC’s new rules.

Why should we in Europe be concerned if this is a US issue?

Whilst there has been little public debate in the UK or Europe around the issue of net neutrality, it is becoming an increasingly important one. Earlier this year, the Latvian government (currently holding the European presidency) proposed that there should be exceptions to net neutrality rules, particularly when providers’ networks face “exceptional…congestion”.

In March, a majority of EU Member States voted in favour of changing the rules to bar discrimination in internet access but, crucially, the rule changes would allow the prioritisation of some “specialised” services that required high quality internet access to function. This was reinforced by the Chief Executive of Nokia who argued that some technologies (such as self-driving cars) will be hindered so long as providers have to abide by net neutrality principles.

The current situation in the EU makes an interesting comparison to the FCC ruling, as it has been argued that the EU is heading in exactly the opposite direction to the FCC’s strong position on net neutrality. It’s unclear at this stage what impact the FCC ruling will have on the EU’s position. The difficulty in the EU is that the legislative process is more complex than in the US, due partly to the number of countries and bodies involved. Furthermore, because there are many countries and many telecoms CEOs, there is much stronger lobbying against the legislation.

A recent report by Web Index found a mixed bag when it comes to net neutrality regulations across the EU. The report noted that whilst the Netherlands scored eight out of a possible ten for net neutrality, countries such as Italy and Poland scored only two. In a blog post for the European Commission, Tim Berners-Lee argued that binding net neutrality rules would “raise the bar for the performance of lower ranking countries, ultimately enabling Europe to harvest the full potential of the open Internet as a driver for economic growth and social progress”.

Will regulation solve the problem?

Whilst tighter regulation can help to oblige telecoms companies to adhere to the principles of net neutrality, that does not mean the problem will be eliminated. As with all laws, their existence does not eradicate an issue; it merely minimises it. For example, the Authority for Consumers and Markets in the Netherlands recently fined the country’s two largest operators, KPN and Vodafone, for blocking services and zero-rating data for subscribers to HBO. It’s clear that violations will continue to occur, but arguably there will be fewer once regulation is in place.

Who opposes net neutrality?

A range of large companies oppose net neutrality, including Nokia (see above), Panasonic, Ericsson, IBM and Cisco, amongst others.

Who supports net neutrality?

Article 19, Greenpeace, Twitter, Microsoft (although Microsoft argue that “traffic should not be subject to unreasonable discrimination by their broadband provider” – it’s unclear what they mean by “unreasonable”), Etsy, Amazon, Facebook and, of course, the inventor of the World Wide Web, Tim Berners-Lee.

What about Google?

Google have been largely quiet publicly when it comes to the net neutrality debate in recent years, although they had previously been very vocal on the issue and have lobbied the FCC in the past.

Why should I care about net neutrality?

Net neutrality ensures that we have an internet that enables the broadest possible range of views. By guaranteeing a level playing field, it ensures that no one perspective dominates the internet. If companies are able to ensure their data travels in the fast lane, then we can be sure that those companies will dominate the landscape because their sites transfer data quickly and efficiently. This will ultimately lead to a narrowing down of sites, as people avoid services whose data travels in the slow lane in favour of those in the fast lane. Big companies will get bigger, small companies will disappear and new companies will not get off the ground without significant sums of money to enable them to compete. The internet thrives on innovation, and an abandonment of these principles would seriously impede it.

We have also seen in other forms of media what happens when regulation is too lax. In print and broadcast media we have seen a decline in media plurality, with certain outlets coming to dominate the landscape through their ownership of popular print and broadcast media. An abandonment of net neutrality rules could lead to the very same decline online: an internet dominated by a very few large corporations providing the vast majority of the content. This is, of course, bad news for those who use the internet and bad news for democracy, as a vibrant democracy relies on media plurality to ensure a well-informed electorate.

Where can I find out more about net neutrality?

The digital rights campaigning organisation Open Rights Group keeps a close eye on developments and often posts updates on net neutrality in the UK. Article 19 and Index on Censorship are also good sources of information on the issue, and both are members of the Global Net Neutrality Coalition – you can find details of all those involved on its website. Web Index, produced by the World Wide Web Foundation, measures the World Wide Web’s “contribution to social, economic and political progress in countries across the world” and produces an annual report that has recently added net neutrality to the list of measures it assesses. American readers can also defend the principles of net neutrality through the Battle for the Net campaign.

If you would like to write for Informed, about net neutrality, the internet or any issue related to the information sector, please get in touch with your ideas via our contact page here.

Should UK universities block access to parts of the web?

The following post was submitted by Daniel Payne.

Image c/o Gerardofegan on Flickr.

Only a subset of the internet – or no internet at all – is ever accessible to any individual. We are never using the Internet, if such a thing even exists. This is due to a variety of positive and negative mechanisms, which include the state, the law, the self, whether you actually have internet access at all, internet service providers, friends, teachers, financial situation, cultural reasons, and your mum.

It might come as a surprise to learn that universities and other higher education institutions throughout the UK choose to block categories of the internet beyond what is required of them by law, from sex and abortion, to naturism, online greeting cards, and marijuana. This is often referred to as “content-filtering” by the companies who perform the blocking, since this sounds less bad.

As information professionals working in the libraries of these institutions, should we care that we are working in an environment which automatically excludes whole categories of the internet? Why does a university pay money to do this, and who decides which categories to block and why?

There are of course parts of the internet which are blocked before the university steps in. The Internet Watch Foundation (IWF) maintains a constantly changing list called the Child Abuse Image Content list (CAIC). Companies which give us access to the internet subscribe to this list and block those parts of the web. There are also websites blocked by order of a court; these are usually file sharing sites where major infringement of intellectual property occurs. Try accessing: http://www.thepiratebay.se.

In addition to that which is legally required, many universities license third-party content filtering software such as BrightCloud, Websense, Smoothwall, Bloxx, and Fortiguard [1]. In response to a request for a webpage, the software will either allow or block access depending on which categories the university has selected (and, in some universities, the profile of the individual requesting it).
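To make this concrete, here is a minimal, purely illustrative sketch (in Python) of how category-based filtering of this kind typically works. The category database, category names and blocking policy below are hypothetical examples invented for illustration; they are not taken from any of the products named above, each of which maintains its own proprietary categorisation data and policy options.

```python
# Illustrative sketch of category-based web filtering.
# CATEGORY_DB and BLOCKED_CATEGORIES are hypothetical stand-ins, not real data.

from urllib.parse import urlparse

# A vendor-style lookup of hostname -> category (vastly simplified here).
CATEGORY_DB = {
    "poker-example.com": "gambling",
    "cards-example.com": "online greeting cards",
    "news-example.com": "news",
}

# Categories the institution has chosen to block for a given user profile.
BLOCKED_CATEGORIES = {"gambling", "adult", "online greeting cards"}


def check_request(url, blocked_categories):
    """Return an ALLOW or BLOCK decision for a requested URL."""
    host = urlparse(url).hostname or ""
    category = CATEGORY_DB.get(host, "uncategorised")
    if category in blocked_categories:
        return "BLOCK ({})".format(category)
    return "ALLOW"


if __name__ == "__main__":
    for url in ("http://poker-example.com/lobby", "http://news-example.com/story"):
        print(url, "->", check_request(url, BLOCKED_CATEGORIES))
```

The point of the sketch is simply that the decision is a lookup against a pre-built category list chosen by the institution, which is why the quality and transparency of that categorisation matters so much.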

So what categories are universities choosing to block? Under the Freedom of Information Act, I contacted universities to find out whether any blocking occurred on their networks and, if so, which categories they blocked. Where universities claimed an exemption from disclosing a list of URLs due to perceived security implications, subsequent requests were made to ascertain the “categories” by which websites were blocked (e.g. pornography).

Here is the good news: of the 119 higher education institutions I received a response from, 63% confirmed they did not carry out internet blocking [2]. Indeed, some institutions, such as Imperial College, pointed out that blocking parts of the internet would be against the principles of academic freedom.

Here is some of the bad news (a full list of responses is available on figshare [3]):

  • 10% refused to confirm or deny whether they blocked parts of the internet.
  • Trinity Laban Conservatoire of Music and Dance blocks the category “abortion” for junior users.
  • In addition to “adult”, Queen’s University Belfast also blocks “naturism”.
  • University of Aberdeen and Nottingham Trent University block “marijuana”.
  • There are a whole host of vague categories such as “questionable”, “tasteless”, “extreme politics”, “violence”, “unethical”, and “intolerance”.

Universities who carry out the category-based blocking described above are keen to point out that they have mechanisms in place by which an individual can request that a block is lifted. However, this can often involve seeking permission from the head of department, or submitting an evidence form which justifies your need to access that material – processes which will never be immediate and could be humiliating. Should an adult have to get permission to access porn? Is the viewing of porn on library computers, in full view of everyone else, so endemic in UK universities as to warrant this? What about a 15 year old looking up abortion?

The 10% which refused to provide any information at all generally did so by claiming an exemption under section 31(1)(a) of the Act, which permits public bodies to withhold information in the interests of the prevention and detection of crime. My only comment would be to note how surprising it is, then, that 63% of universities apparently saw no such risk.

Universities and these content-filtering companies cannot or will not release very detailed information on these categories, since doing so would provide information for the individuals or organisations behind those URLs to attempt to circumvent their designated classification. We therefore don’t really know much about how companies decide which webpages are “unethical” or “questionable”.

Universities and their libraries are about creating, disseminating, questioning, and archiving information. The biggest possible subset of the internet out there in the wild should not be reduced any further by universities according to an arbitrary set of “undesirable” categories, but offered alongside digital literacy skills which empower students to judge information for themselves, rather than having judgements made on their behalf.

 

[1] Some universities freely volunteered the name of their content filtering software. Some, when requested, disclosed this information. Others specifically refused due to “commercial” interest reasons. The content-filtering companies listed here have all been mentioned by at least one university.

[2] Where a university responded by stating that it only blocked malware/spam sites, this was counted as a “no blocking” response.

[3] Payne, Daniel (2014): Categories of websites blocked by UK universities. figshare. http://dx.doi.org/10.6084/m9.figshare.1106875. Retrieved 17:12, Jul 23, 2014 (GMT).

 

This post represents the opinions and thoughts of the author alone. Any information obtained is believed to be accurate. If you believe there are errors, please get in touch.

Should access to the internet be a fundamental right…for everyone?

(Image c/o gianni on Flickr)

Overcoming the divide between the richest and the poorest in society has always been a significant challenge. The wealthiest in society have always been in a position to afford the services required to improve their quality of life: better healthcare, better education and so on. In the twentieth century, particularly post-1945, there were renewed efforts to address this disparity through the introduction of the National Health Service, a functioning welfare system and free secondary education for all pupils.

Between 1910 and 1979, the divide between the wealthiest and the poorest in the UK narrowed significantly, particularly after 1936. Since that period, however, the trend has been in the opposite direction, as the wealthiest take a larger share of income than at any point since 1940. This widening of the divide between the richest and the poorest is, in part, a symptom of the watering down of the post-1945 social contract, characterised by a move away from the primacy of society towards the primacy of the individual. Technological advances have, however, provided an opportunity to close this gap once more.

As yet, however, this potential has not been realised, not least due to the expense of the technology and the skills required to exploit it. Indeed, whilst the impact of easier public access to relevant information has been felt to a degree, the continued existence of a digital divide hampers progress towards the more equitable society the technology can help to deliver.

At present, there are around 7 million people in the UK who have never accessed the internet (the number without access is obviously higher). The divide presents a number of difficulties for those without access. For example, it can hamper their children’s performance at school. It can put them at a disadvantage when it comes to their health and, as preventative care pushes up the agenda, the implications for the unconnected are stark. It can affect them economically, both in terms of the savings they would otherwise make and as a consequence of the UK coalition’s welfare reforms pushing social security online. Closing this divide can, therefore, improve life chances and help to shrink the gap between the richest and the poorest (it obviously won’t eliminate the gap on its own; that would require more wide-ranging action).

As government services have shifted online, as the commercial potential of ever faster broadband has begun to be realised and as the economic benefits of getting everyone online are talked up, there has been a growing awareness of the importance of addressing the divide between those connected to the internet and those who are not. However, one group is often excluded when it comes to identifying and supporting the so-called ‘information poor’ – prisoners.

Towards the end of last year, the Prison Reform Trust and Prisoners’ Education Trust released a report on computer and internet access in prisons. Through the Gateway: How Computers Can Transform Rehabilitation [PDF 1.46MB] explores the use of information and communication technology (ICT) in prisons and its potential impact on rehabilitation. Based on a survey of prisons sent to all prison governors and directors in England and Wales supported by the National Offender Management Service (NOMS), a focus group of prisoners’ families, prison visits and expert roundtables, the report argues that drastic change is needed and access to ICT should be reconsidered.

(Image c/o Marc Soller via Flickr.)

Now, some might argue that if you are in prison you lose your liberty and therefore any right to access services such as the internet. However, whether we like it or not, many prisoners are only removed from society on a temporary basis; they will have to be reintegrated at some point. As such, we need to consider their return to society, their re-integration and, of course, provide the necessary support to help ensure that they do not re-offend. As the Prison Reform Trust underlines in its coverage of the report on its website, nearly half of all prisoners (47%) are reconvicted within a year of their release. Furthermore, in 2011-12, “just 27% of prisoners entered employment on release from prison”. The challenge for us as a society is to reduce the re-offending rate and ensure that prisoners are not pushed to the edges of society once they have finished serving their sentence.

Changes to the welfare system are in danger of making integration increasingly difficult for those released from prison. With the government pushing job seekers online to find work or suffer associated penalties, it is more crucial than ever that prisoners are not left behind and therefore placed at a serious disadvantage when it comes to finding work. The scale of the problem is reinforced in the report:

47% of prisoners say they have no qualifications. This compares to 15% of the working age general population in the UK.

21% of prisoners reported needing help with reading and writing or ability with numbers.

With such a lack of skills, it is clear that significant support is needed in getting prisoners online, preparing them for work outside of prison and ensuring they are not left behind or penalised by the government’s new social security regime. When almost half of prisoners have no qualifications whatsoever and 1 in 5 need help with reading and writing, there are clearly significant barriers ahead in terms of their re-integration into society. As two prisoners noted in the report:

“Here’s why you need internet for resettlement: to keep up with changes outside – job criteria can change while you’re inside; checking on housing by particular postcodes – co-ordinated with your conditions of release.”

“It’s a bit of a risk – being linked into the internet – but the bigger risk is sending people out who are not able to cope and who cannot find gainful employment.”

The provision of internet access to prisoners can not only help develop their skills and ensure they are not left behind after they have served their sentence, it can also help to further their education. The growth of Massive Open Online Courses (MOOCs) provides the opportunity to open up education for all, free of charge (provided they are online, of course). Why should those with the skills to utilise the internet be prevented from furthering their education and increasing their chances of employment after their release? If we are to be serious about reducing re-offending rates, then shouldn’t we be looking at all the options and seeing an internet connection not as a luxury, but as an important tool in helping to ensure prisoners can be re-integrated after serving their time?

Prison libraries could play a key role in ensuring access is provided and the technical skills of prisoners are developed. However, they are hampered by a number of restrictions placed upon them. Librarians working in prisons are severely restricted as a result of their equipment being connected to a tightly controlled prison network. Many sites are blocked, including blogs, social media and sometimes government websites. Yet even with such restrictions in place, prisoners are still not permitted to use the library computers because they are connected to the internet. Instead, prisoners are only provided access to standalone computers that are not connected to the internet and only permit the user to play games or write legal letters. That is, of course, if their prison is lucky enough to have any computers at all.

Whilst there may be legitimate concerns about the kind of material certain prisoners may attempt to access, such restrictions are not helpful in trying to ensure their reintegration into society when their sentence is served. The opportunities available online to learn new skills, not to mention the opportunity to learn basic ICT skills with the help of a trained prison librarian, can play a significant role in reducing the re-offending rate and provide former prisoners with the opportunity to make a more positive contribution to society. As Nick Hardwick, HM Chief Inspector of Prisons, notes in the report’s foreword:

“We can’t go on with prisons in a pre-internet dark age: inefficient, wasteful and leaving prisoners woefully unprepared for the real world they will face on release. I have not met one prison professional who does not think drastic change is needed.”

If we want to reduce reoffending and ensure a more equitable society, then we need to address the digital divide that exists not only across our communities, but between our communities and those that have been excluded from them. It will prove controversial with many, but as the world rapidly changes around us, we need to ensure that those excluded can be reintegrated into a world that can be very different from the one they were excluded from.

A response to ‘Web filtering and the dangerous impact on users’

This blog post was written in response to a previous Informed post on ‘Web filtering and the dangerous impact on users‘.

The article on web filtering deals with two considerable issues: filtering within an organisational context, and the state imposing such filtering on all users. As to the government proposals, there can be little argument: state-imposed filtering is indefensible – it should be down to the administrators of each network (including home networks) to decide how they implement filtering, if at all. But while it’s perfectly valid to highlight the problems encountered by users when browsing on filtered networks, this doesn’t mean that such systems aren’t necessary.

The reason given in the article – that ‘many schools and universities will already have similar filters put in place to “protect” their students’ – does not quite tell the full story as to why filtering is in place; one reason for having some kind of filtering is to block threats to the internal network. But what are these threats that need additional security measures, and are they really so serious as to justify a universally unpopular solution? Well, to name a few: worms, trojans and spyware are not particularly rare online. These all tend to come under the general term ‘malware’, more information on which can be found easily (unless, perhaps, you’re on a filtered network), e.g. http://www.microsoft.com/en-gb/security/resources/malware-whatis.aspx

There are other ways of protecting a network from these risks. Firewalls, anti-virus, pop-up blockers, and more. But it is rare that an organisation will ask themselves which SINGLE one of these they should use; they will ask themselves which vendor they should use for EACH of them. A council/library/university network will have all these things installed; far more security than the majority of people have in their home environment. At home people tend to believe they rely on Internet access, but a few days (or even weeks) without the web or your computer doesn’t necessarily mean a disaster. An organisation without computer systems for a few days would often mean no work done on those days. I’ve been at work in a public sector organisation where the place effectively shut down while a suspected online security breach was being investigated. Internet connectivity turned off, all systems inaccessible (social care systems etc.) Luckily, in that case it turned out to be a false alarm, but if malware had been found then the consequences would have been far worse. It’s not just loss of productivity, it’s the potential loss of security to the sensitive data (in Public Sector organisations that can mean a lot of personal data) held on the network, and the systems used within that network.

One response to this is ‘well, I get on fine at home without filtering and can be sensible online, why should work consider I need filtered access?’ The evidence is unfortunately that as a society we don’t seem to get on fine at home. There are many varying estimates of the number of computers infected with malware (and we need to be cautious in order to make allowance for bias in reports from internet security companies), but as an example take the following news story from the BBC about the DNSChanger virus, estimated to have affected 4 million users:
http://www.bbc.co.uk/news/technology-18735228

Far from the idea that these are ‘blocks imposed without enough thought about how they will impact on users’, it is likely that the blocks are more often dismissed by users without enough thought as to why they are put in place.

Take short URLs. To a user, the idea of blocking short URLs is nothing other than unnecessary and thoughtless censorship. However, short URLs have long been considered an online security risk; the following post gives a good indication of why using them in emails is very likely to get your emails relegated to spam boxes.
http://blog.wordtothewise.com/2011/06/bitly-gets-you-blocked/

Also, consider consistency: users expect sites either to be blocked or not blocked, not for this to change daily. The previous article mentioned that the ‘blocks imposed are also highly inconsistent, with page categorisations changing by the day.’ The filters are generally very consistent, because they are relatively simple. The lack of consistency comes from the websites, not the filter, and very little can be done about that. Even a perfect filtering system would return varying results each day as the pages it was assessing changed daily. And this is certainly preferable to maintaining any kind of fixed categorisation for each site. To take an extreme example, a site can be safe one day and hacked the next into something not safe. But that information is never given to users, so the ever-changing categorisations are simply perceived as a faulty system.

The question was also raised of confidence being dented for those with limited computing skills who are confronted with block screens. However, I would venture that this is going to be less than the dented confidence caused by phishing attacks (http://www.microsoft.com/en-gb/security/resources/phishing-whatis.aspx), such as those designed to steal personal and bank details. These attacks target inexperienced Internet users who aren’t confident online, and who don’t necessarily imagine that a page that looks like their bank may not be. Again, the answer seems to be to provide more information when blocking sites; a mysterious block message is certainly alienating, though potentially better than exposing users to the threat. But those block messages are fully customisable, so why not include a well-written justification as to why the organisation chooses to block that classification of site?

Organisations could certainly do a better job of explaining the decisions they make. Perhaps more information would go some way to tackling the idea that ‘in reality the filters are not effectively protecting anyone’. Instead of http://longurl.org/ being thought of as just a way to get round an annoying filtering system, users could be using it through choice, both at home and at work. It’s an excellent tool to ensure you’re not taken in by a short URL which turns out to be something you weren’t expecting.

Ultimately, until those complaining about Internet filtering are able to put forward alternative security assessments and plans for those organisations, filtering will still be chosen by default as the best option. That may seem overly dismissive of users: the fact that very few people are qualified to produce a network security risk analysis doesn’t mean that we’re not entitled to complain, to hold a reasonably informed opinion, or to be concerned about censorship. But it does mean that we need to appreciate that only highlighting the problems is no more effective than complaining about the weather. So what can actually be done about it? Here are a few possibilities:

  • Turn off all web filtering apart from that which is classified as a security risk. Of course, I’ve focussed only on security filtering, such as sites classified as linking to malware. You may think, ‘well, fair enough, block all those things, but the point was about actually useful sites that we need to go on’, and that’s fair enough. But with this option you will still be left with the filtering software turned on, it will still occasionally get things wrong, and a cause of irritation and loss of productivity will continue. Organisations can clearly do better to alleviate this, though, by involving users in the decisions and providing their own arguments.
  • Separate networks. When you add Internet access to a private network you are effectively merging that network onto the Internet, and then desperately trying to bolt on security to ensure that your network remains private. An alternative is not to connect the network to the Internet at all. That doesn’t mean not allowing users online – a separate network could be dedicated to online access while the actual network is kept locked down. Users would probably have to say goodbye to anything like downloading a file onto their PC, though, and it may make certain tasks far less practical.
  • Better filtering software. This is the ideal, but very difficult. There are better methods of filtering than current providers often use, but the issue is always what can be done quickly in the time available. Intelligent analysis of a web page is not going to be easy to do while the user is waiting for the page to load. Current filters are simplistic but fast.

It’s worth remembering that for every anecdote about filtering mistakes, such as the example where the British Library banned Hamlet (http://blog.inkyfool.com/2013/08/hamlet-is-banned.html), network administrators have a number of equally worrying stories: the times users have complained that they weren’t allowed to download a file that turned out to be a virus, or the times users have requested access to sites that have been hacked and marked as a security risk. It may be that, by working to provide more information about why some filtering is in place and removing unnecessary filtering options, we can move towards a situation where the filters provide a real and easily identifiable service to the user, not a hindrance.

By Public Sector Systems

Web filtering and the dangerous impact on users

Web filters impose highly inconsistent blocks.
(Image c/o mayhem on Flickr.)

The following is a comment piece by a contributor who has asked to remain anonymous, on how the suggested introduction of web filtering software can and will impact upon practice.

In July 2013, David Cameron announced that he wanted filters against online porn turned on by default on the internet connections of all UK households. The impact of this proposal has been analysed in depth by many experts, all with more expertise than me, so I’m not going to rehash their explanations. What I am going to do is look at how these controls might work in practice.

I work as an Information Officer in a public body. In this role, I may be asked to research any topic which is either of current interest to my employer, or which is likely to become so in future. My employer is also obliged, due to its position, to impose rather more draconian controls on online resources than I have been used to in previous employment in the private and higher education sectors. The filter being used in my workplace is also highly inconsistent in its blocking actions, just as the proposed filters are predicted to be.

Here are some of the most regularly encountered examples of sites and materials which are blocked in my workplace:

  • Social media such as Twitter, Facebook, Pinterest (but Twitter can be accessed via TweetDeck)
  • Photo hosting sites linked to Twitter (Twitpic, Instagram etc)
  • Weblogs (but only those from certain providers: Blogger is blocked but WordPress is fine)
  • Videos (certain ones: Youtube is blocked, but not Vimeo)
  • Audio files such as podcasts (BBC etc)
  • Presentation sites (Slideshare is blocked, but not Prezi)
  • MOOCs such as Coursera
  • Link shorteners (is.gd, bit.ly etc)
  • Microsoft help pages (don’t ask me why – attempting to access these actually gives a message saying “Your internet access has been revoked”!)

Certain sites can be accessed (in a wonky, stripped format), but because the filter blocks some images, the login button or an essential action button may simply not appear, which effectively adds those sites to an unofficial block list. The filter often doesn’t say that a page is blocked; it just gives a “this web page is not available” message, and when the “More” button is clicked, the options given are “This web page might be temporarily down or it may have moved permanently to a new web address”. When first encountered, this message causes confusion, and users spend time checking with other people, both internal and external to the company, to see if they can access the page, or whether there are network problems preventing internet access.

In my daily work I create internal briefings of relevant professional news, which are required in order to keep users informed about important developments in their specific work areas. Due to the filter, these briefings can only refer to text materials: any information delivered by non-textual methods cannot be accessed, and Twitter accounts which may be providing relevant information are blocked. With the shift by many bodies and companies away from providing RSS feeds, and towards using Twitter as an official information source, large amounts of information are becoming inaccessible to the users of my service.

Nonetheless, I try to monitor sources providing information via Twitter, and I currently manage to avoid the Twitter block by accessing it via a TweetDeck extension on the Chrome browser. Once on TweetDeck, I skim for relevant information. If my contacts provide information via a link, Twitter/TweetDeck automatically shortens it…but of course, link shorteners are blocked by the filter. To view a link provided via Twitter, I must:

  • Click on the link provided on Twitter
  • Get an Access Blocked page
  • Copy the link displayed on the block page
  • Open longurl.org
  • Paste the link in, and hit submit
  • Click on the lengthened link displayed, to visit the page

This is not an efficient or reasonable way of working to source information, but it’s currently the only option possible for me within this filtering environment.
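For context, what a lookup service like longurl.org is doing behind the scenes is essentially following the redirect issued by the shortener and reporting the destination, rather than loading the destination page itself. A minimal sketch of that idea in Python is below; it assumes the machine running it can reach the shortener’s domain (which is exactly what the filtered network described above prevents, hence the reliance on an external service), and the short link shown is a hypothetical placeholder.

```python
# Illustrative sketch: expand a shortened URL by following its redirect chain.
# Assumes the machine running this can reach the shortener's domain.

import urllib.request


def expand_short_url(short_url):
    """Follow HTTP redirects and return the final destination URL."""
    # Some shorteners reject HEAD requests; switching to GET would also work.
    request = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(request, timeout=10) as response:
        # urlopen follows redirects automatically; .url holds the final address.
        return response.url


if __name__ == "__main__":
    print(expand_short_url("https://bit.ly/example"))  # hypothetical short link
```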

The blocks imposed are also highly inconsistent, with page categorisations changing by the day. This means that I can never be sure that a resource I access one day will be accessible the next day, or vice versa.

So, that’s what it’s like trying to work and provide an effective information service within a heavily filtered environment. It’s a struggle, with lots of time wasted trying to circumvent blocks that are imposed without enough thought about how they will impact on users. My ability to help my users is hampered, in a myriad of small but nonetheless important and time-consuming ways. My users are being blocked from accessing information sources, whether they realise it or not, and they may not have either the time or the ability to circumvent these blocks. Each individual user is the best judge of what materials they may need to inform their work, not an automated filtering system, but if those individuals can’t see the full range of information available, how can they decide whether it’s useful or not?

Many schools and universities will already have similar filters put in place to “protect” their students. As you can see though, in reality the filters are not effectively protecting anyone. Instead, what they actually do is make reliable and regular access to information and information sources complicated, or even impossible. It means that those who have to work within the current filtering system actually need more support for their online needs than those working outside it, to assist them in finding ways of accessing the resources they need, in a way which the filtering software deems acceptable. A schoolchild trying to use the internet for homework research will be blocked from accessing or using relevant resources, and the development of their knowledge and understanding may suffer. Adults without advanced internet skills will be confronted with “alert” block screens for what they may have felt to be innocuous search terms: for those who have limited computer skills, this can be a serious blow to their confidence online, and discourage them from using online resources in future.

Sometimes, however, there’s no method or tool to get around the filtering limits. Some sites are just…unavailable. Entirely. This is online censorship, and in this context, it is state-imposed censorship. I can think of a few countries in which state-imposed censorship is the default position, and they are not countries where the population could be considered well informed, or fully engaged in the political process. In light of a developing belief that the right to access the internet without unreasonable restriction is now a core human right, any move to restrict that access in any way is a massive backwards step for any government.