Many thanks to Nik Williams of Scottish PEN for the following article on the Investigatory Powers Bill.
So there we have it. After a year of discussion and debate, the 1000+ pages of documents outlining the role of surveillance in a modern democracy have passed through both Houses of Parliament. After a bloated few weeks, with discussion monopolised by an ill-placed amendment on press regulation, the Investigatory Powers Bill will soon be an Act of Parliament. Here at Scottish PEN this occasion can only be met with resignation and deeply held reservations.
The nature of the closing weeks’ discussion in both houses should depress even the chambers’ most ardent supporters. With Baroness Hollins’ proposed amendment to extend exemplary damages to victims of phone hacking by newspapers not signed up to an approved regulator, the debate drifted away from the surveillance powers in the bill that will distinguish the UK from every established democracy in the world, towards a rehash of a discussion that has been left unfinished since the Leveson Inquiry in 2011/12.
This did the bill and our civil liberties a disservice. When was the last time we heard MPs and Peers use the words ‘bulk’, ‘communications data’, ‘request filter’, ‘interception’ or ‘civil liberties’? While phone hacking and press regulation commandeered space reserved for surveillance powers, these issues were ignored, scrutiny was frozen and forsaken, and consensus across the house was assumed.
So now we are left with powers that enable the web records of every British citizen to be stored by public bodies for 12 months; the capacity of intelligence agencies to hack and potentially destroy devices, systems or networks; powers that collect data on the many to find the few; and obligations that can be foisted on technology companies to undermine encryption. This is a crude summary of the powers – their sheer scale and impact will only be fully realised when the bill is enacted.
So what do we do now? We mobilise, we secure, we seek to frustrate those who watch over us, we get smart. Interrogating the platforms we use and their privacy agreements is not a luxury afforded to serial paranoiacs or techies alone; it is an action we all need to take – a marker on a roadmap we must all use to navigate our way through a narrowing and treacherous landscape.
These are obligations that fall to all of us; whether we write, research, communicate or shop online, or offer digital services to others, we all need to position privacy at the heart of our thinking, not as a peripheral afterthought. This is never truer than in the situation public, academic and specialist libraries now find themselves in. Crudely defined as telecommunications providers – the IP Bill lacks any lower threshold for who can be obliged to store data and comply with other requests from the state – the already precarious existence of libraries in the UK is placed further in jeopardy. But can libraries, seen by many as a refuge or sanctuary, be places that invite surveillance and consolidate our private information?
Following a pilot workshop at Glasgow Women’s Library in July, Scottish PEN is rolling out a series of workshops in Edinburgh, Orkney and Perth to build the capacity of libraries across these regions to protect the digital security and privacy of both their institutions and patrons. With libraries operating for many as the portal to the online world to facilitate communication, research, shopping and applying for jobs or benefits, how libraries can continue to offer these services in good faith in light of these new obligations is something we need to address now.
We do not believe in the principle that the collection of the private data of innocent citizens will guarantee our safety or security (a belief mirrored by the intelligence agencies who fear, according to a confidential MI5 report, that collecting too much data “creates a real risk of ‘intelligence failure’ i.e. from the Service being unable to access potentially life-saving intelligence from data that it has already collected”). But it appears that we all, including the intelligence agencies, need to strap in and assume nothing is sacred, nothing beyond the reach of the voraciously hungry state.
But we need not be resigned to this fate. We need to know these powers inside and out: what they cover, what they don’t, and what they may enable through vague wording and overly broad interpretations. We need to listen to those who have things to say about encryption, threat modelling and zero-knowledge systems, and perhaps most importantly, we need to feel confident enough to reach out to others to ask questions and share knowledge, and this is where libraries can truly shine. The idea of a library as a repository of collective knowledge and endeavour is not new, so why can’t libraries also be spaces within which we can explore privacy-enhancing technologies, discuss the role of surveillance in our modern and digital democracy and learn more?
Perhaps then we can renew privacy’s position as a fundamental right, perhaps then we can reclaim the Internet as a space for exploration as opposed to a space of observation, perhaps then we will know how much of us is up for grabs.
These are a great many perhapses, but they give us a place to start, and that is better than nothing.
This blog post was contributed by Ian Clark from the Informed team and Lauren Smith, a Research Associate at the University of Strathclyde.
The news that libraries may be forced to hand over personal data to the security services raises serious ethical questions regarding the confidentiality of what people choose to read. A fundamental ethical principle of the library and information profession is the freedom of individuals to access information and read whatever they choose in confidence. The Chartered Institute of Library and Information Professionals (CILIP) is very clear on the obligations to library users. Its ethical principles state the need to demonstrate:
Commitment to the defence, and the advancement, of access to information, ideas and works of the imagination.
Such a principle is undermined if the government is known to be able to access data on the “information, ideas and works of the imagination” that individuals access. The chilling effect of such a move would inhibit individuals from accessing whatever they want without fear of reprisals from the state.
1.2 It is the responsibility of individuals using Public Access Points to decide for themselves what they should, or should not, access.
1.3 Those providing Public Access Points should respect the privacy of users and treat knowledge of what they have accessed or wish to access as confidential.
The proposals laid out by Theresa May seriously threaten these basic ethical principles. If the state is able to access data on what individuals have been reading in public libraries their freedom to read and access what they choose is seriously compromised.
Ironically, these proposals come at a time when libraries and librarians in other parts of the world are emphasising the importance of ensuring that individuals can access what they wish in confidence. In December last year, librarians were in uproar when Haruki Murakami’s borrowing record was published in a Japanese newspaper. In response, the Japan Librarian Association re-affirmed that:
“Disclosing the records of what books were read by a user, without the individual’s consent, violates the person’s privacy.”
In the face of similarly intrusive legislation (the PATRIOT Act) in the United States, some libraries have begun purging records of inter-library loan requests to protect users’ privacy. As yet we have not seen comparable moves by the profession in the UK, but the increasingly aggressive rhetoric from the government regarding what and how individuals seek out information is clearly in conflict with the values we espouse as a profession.
Libraries should not distinguish between books and web activity. What individuals read and access online should be as private and as confidential as their book borrowing habits. Although we do not have the constitutional protections for intellectual liberty that American library users are afforded under the First Amendment, both professional organisations (such as CILIP) and political bodies (such as the Council of Europe) are very clear that what a user accesses in a library should remain confidential. The proposals put forward by Theresa May threaten these basic principles of intellectual freedom and liberty and will put intolerable pressure on public libraries. Our government’s desire to undermine these principles is not only dangerous, but will also seriously undermine the bond of trust between public libraries and their users.
Simon Barron (@SimonXIX) explains what DDoS is, how it is used and debunks some myths about it.
On 7 December 2015, the academic network provider, Janet, suffered a DDoS attack which partially brought the service down (Martin, 2015). Workers in Higher Education institutions across the UK (and organisations that have their internet access provided by server farms in HEIs) suddenly found their internet connections weren’t working properly while Jisc engineers scrambled to fend off the attack and restore service.
A DDoS (Distributed Denial of Service) attack is a means of bringing down a server (or a cluster of servers) by flooding it with requests. In normal communication on the web, a local computer (e.g. a Windows desktop PC) sends a request to a server (e.g. by pointing Firefox to http://theinformed.org.uk/) to serve up a webpage; the server then responds by sending the data (e.g. HTML and CSS files) that makes up the webpage. A DDoS attack sends thousands of requests to a server continually from multiple IP addresses such that the server cannot respond: either by using up all the server’s CPU processing power at once or by filling up the server’s RAM, causing it to crash.
DDoS (sans the word ‘attack’) can be a valid method of testing the integrity of a server. A developer setting up a web service can perform load testing by incrementally increasing the number of requests sent to a page until it falls down: this gives you the maximum number of users that should use the service at any one time. A tool like Bees with Machine Guns (https://github.com/newsapps/beeswithmachineguns) uses the power of Amazon Web Services to perform stress testing.
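The incremental load-testing idea described above can be sketched in a few lines of Python. This is an illustrative example, not how Bees with Machine Guns itself works: it spins up a throwaway local web server, then ramps up the number of concurrent requests sent to it and reports the mean response time at each step. The names (`QuietHandler`, `ramp_test`) are invented for the sketch; only ever point this kind of test at a server you own.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class QuietHandler(http.server.BaseHTTPRequestHandler):
    """A minimal handler that answers every GET with a tiny plain-text body."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # silence per-request console logging

def fetch(url):
    """Time a single request/response round trip."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.monotonic() - start

def ramp_test(url, steps=(1, 5, 10)):
    """For each step, fire `n` concurrent requests and record mean latency."""
    results = {}
    for n in steps:
        with ThreadPoolExecutor(max_workers=n) as pool:
            latencies = list(pool.map(fetch, [url] * n))
        results[n] = sum(latencies) / len(latencies)
    return results

if __name__ == "__main__":
    # Start a throwaway server on a random free port, then ramp load against it.
    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/"
    for n, mean in ramp_test(url).items():
        print(f"{n:3d} concurrent requests: mean latency {mean * 1000:.1f} ms")
    server.shutdown()
```

In a real test the ramp would continue until response times degrade or the server starts refusing connections; the point at which that happens is the capacity figure the text refers to.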
However DDoS is more effectively lodged in the public consciousness as a weapon of hackers. DDoSing without the express consent of the owner of the server is illegal. DDoSers in the USA have been prosecuted under the Computer Fraud and Abuse Act (CFAA) (Coleman, 2014). This weaponised version of DDoS is usually done through botnets. “A botnet is essentially just a collection of computers connected to the Internet, allowing a single entity extra processing power or network connections toward the performance of various tasks including (but not limited to) DDoSing and spam bombing… Participants whose computers are tapped for membership in a botnet usually have no idea that their computer is being used for these purposes. Have you ever wondered why your computer worked so slowly, or strangely? Well, you might have unwittingly participated in a DDoS.” (Coleman, 2014) A computer can become part of a botnet by being infected with a piece of malware.
Another method is a more voluntary form of DDoS using the program Low Orbit Ion Cannon (LOIC), an open-source load testing tool (http://sourceforge.net/projects/loic/). Like its science-fiction namesake, LOIC is simply pointed at a target and then fired: the user enters the IP address of a server and then clicks the large button labelled “IMMA CHARGIN MAH LAZER”. When co-ordinated, a mass group use of LOIC can send thousands of requests at once. However, the use of LOIC is not secure: assurances – from the Anonymous #command channel and from journalists on sites like Gizmodo – that the IP addresses of LOIC-attack participants cannot be logged on a targeted server are wrong: “The DDoS’ed site can still monitor its traffic, culling and keeping IP addresses, which can be subsequently used to identify participants.” (Coleman, 2014)
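Why those assurances were wrong is easy to demonstrate: any web server sees the source IP address of every request it handles and is free to record it. The hypothetical sketch below runs a throwaway local server that logs each requester’s address; a flood of requests leaves exactly the same trail as a single one.

```python
import http.server
import threading
import urllib.request

seen_ips = []  # every client address the server observes

class LoggingHandler(http.server.BaseHTTPRequestHandler):
    """Answers every GET, but first records where the request came from."""
    def do_GET(self):
        seen_ips.append(self.client_address[0])  # the requester's IP
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep the console quiet

def run_demo(n_requests=5):
    """Start a local server, send it some requests, return the logged IPs."""
    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), LoggingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/"
    for _ in range(n_requests):
        urllib.request.urlopen(url).read()
    server.shutdown()
    return seen_ips

if __name__ == "__main__":
    ips = run_demo()
    print(f"server logged {len(ips)} requests from {set(ips)}")
```

Production web servers do this by default (e.g. access logs), which is precisely how participants in co-ordinated LOIC actions were later identified.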
A DDoS attack is fairly simple hacking: it does nothing more than disrupt a service in a way that is easy to recover from, temporarily taking down a public face of a company.
The real issue is what hacking can be done under the cover of a DDoS attack. While server defences are weakened by devoting processing power to dealing with requests and while sysadmins are distracted fending off the attack, a hacker can covertly perform more malicious hacks like accessing data in a server’s database or changing passwords or planting code or simply ‘rm -rf /’-ing the whole server.
The impetus for this kind of malicious DDoS attack can be political or simply, in the words of hackers, “for the lulz” (Coleman, 2014). DDoS as a tactic for political activism has become associated with the trickster hacker collective, Anonymous, who have used it to take down the websites and servers of various companies or groups. Since DDoS can be used to crash a server, it has been used to take down websites from the Church of Scientology’s site to Sony’s Playstation Network to PayPal (Coleman, 2014).
The use of DDoS as a tool for political activism is hotly debated among hackers. Groups like the Pirate Party and AnonOps (operational planners of Anonymous) disagree about the ethics and efficacy of using DDoS (Coleman, 2014). On one hand are those who argue that DDoSing is nothing more than another “large-scale, rowdy, disruptive [tactic] to draw attention and demand change.” (Coleman, 2014): no different fundamentally from a sit-in protest, a direct action blockade, or an occupation of a physical space. The only differences are squatting on digital space rather than physical space and the increased numbers of participants that can be involved in a protest via DDoS. Anonymous also argue that the visibility of the action and its ability to get the mainstream media’s attention justifies its use to highlight political and social justice issues. In 2013, Anonymous posted a petition on whitehouse.gov asking that DDoS be recognised as a legal form of protesting, the same in kind as the Occupy protests (whitehouse.gov, 2013).
On the other hand, other hackers invoke principles of free speech and freedom of information to decry the use of DDoS. On an absolutist view of free speech, taking a website offline deprives the company or group that owns the website of expressing their views (via the medium of webpages) and also deprives the public of information. Oxblood Ruffin of the Cult of the Dead Cow hacker collective reasons that “Anonymous is fighting for free speech on the Internet, but it’s hard to support that when you’re DoS-ing and not allowing people to talk. How is that consistent?” (Mills, 2012) When using a botnet, there are also ethical concerns in harnessing someone’s computer without their consent to participate in illegal activity.
On the other other hand, a “more dynamic view of free speech could take power relations into account. By enabling the underdog—the protester or infringed group—to speak as loudly as its more resourceful opponents (in this case, powerful corporations), we might understand a tactic like DDoS as a leveler: a free speech win.” (Coleman, 2014)
In a sample of a chat log from an IRC chatroom, #antiactaplanning (quoted in Coleman, 2014), Anonymous members debated the use of DDoS on a US Government website:
<golum>: Whatever, listen. I’ve heard all the arguments for NOT ddosing. But the truth is we need to wake them up.
<golum>: I understand that ddosing could potentially harm our cause.
<golum>: But I think the risk is worth it.
<fatalbert>: well i as for myself disagree therefore im not helping with ddos
<golum>: We need attention
<+void>: OMG ITS THE ANONYMOUS, THE ONLY THING THEY DO IS DDOS, OMGOMGOMOGMOMG LETS MAKE ACTA PASS ON POSITIVE
<golum>: matty—how did contacting the politicians go?
<BamBam>: Yeah I’ve always kinda hated ddos
<golum>: Look. i’ve heard the arguments I just wanted to say, we should do this.
It’s unclear why Janet, the network enabling internet access for UK HEIs, came under attack this week. At the same time, the Jisc website received a direct DDoS attack as well (Jisc, 2015). It’s worth noting that although internet access through Janet in the UK was disrupted, users were still able to access the wider web by routing their traffic outside of the UK network, either through a VPN like Bitmask (https://bitmask.net/) or through the Tor Project’s Tor Browser (https://www.torproject.org/). Such tools are often mistakenly perceived as being used exclusively by hackers, those accessing the ‘Dark Web’, criminals, or terrorists. Following the November 2015 Paris attacks by Daesh, the French Government have openly discussed banning the use of Tor Browser in the same way as Iran or China (Griffin, 2015). In reality, online privacy tools have legitimate and valid uses for defence in computer security: whether against DDoSers or governments and corporations conducting mass digital surveillance.
Whether morally legitimate or not, DDoSing is an effective tactic for hackers and other political activist groups. The core strength of DDoS is that it exploits a weakness in the fundamental principle of the internet: computers using telecommunications networks to request data from one another.
Coleman, G., 2014. Hacker, hoaxer, whistleblower, spy: the many faces of Anonymous. London: Verso.
Ever since the emergence of the internet, there have been concerns about those excluded as services increasingly move online. Commonly referred to as the “digital divide”, this exclusion has manifested itself in two distinct ways: lack of access (first level) and that of skills (second level). Progress has been made with the former in recent years as the numbers of those without internet have steadily declined, but the latter has proven far more difficult to address.
Over the course of the past two years, the number of people who have never accessed the internet has fallen by approximately 15% (from just over 7m in the first quarter of 2013 to just under 6m in the equivalent quarter in 2015). However, the lack of internet skills remains stubbornly high. In a BBC online skills survey last year, the corporation found that 20% of UK adults lacked basic online skills. Indeed, the overall lack of skills (particularly across the poorest households) remained unchanged between 2013 and 2014. These findings have been reinforced by a recent report by Go ON UK that found that more than 12m people “do not have the skills to prosper in the digital era”.
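The headline decline can be sanity-checked with round numbers. The exact ONS quarterly estimates are not given in the text, so 7.0m and 6.0m stand in for “just over 7m” and “just under 6m”:

```python
# Approximate figures standing in for the ONS quarterly estimates.
before = 7.0e6  # "just over 7m" who had never used the internet, Q1 2013
after = 6.0e6   # "just under 6m" in Q1 2015

decline = (before - after) / before
print(f"decline of roughly {decline:.1%}")  # ~14.3%, consistent with "approximately 15%"
```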
Traditionally, public libraries have been a key mechanism for closing this so-called divide. Indeed, the People’s Network was borne out of this effort to close the gap and help more people get online. Libraries were seen as the ideal place to provide the support required. They offer a neutral space free from corporate influence, and are staffed by individuals trained to seek out and evaluate information. However, recent years have seen widespread library closures and cuts to staffing levels that have seriously impeded the services they provide. As a result, libraries’ crucial role in bridging the digital divide has been severely undermined.
Whilst the role of libraries in tackling the digital divide has diminished, private sector organisations have stepped in to fill the gap. In March 2015, for example, BT and Barclays announced that they were going to work together to connect more people to the internet and to provide support to help people develop the skills they need. In order to provide this access and support, BT and Barclays would be working with local authorities to deliver the initiative in public libraries and community centres in England.
The delivery of this initiative is particularly interesting given the role of public libraries in this area, and raises the question of why such an initiative needs the direction of either Barclays or BT given the support public libraries have provided. On the surface, however, in terms of closing the digital skills gap, there appears to be some benefit in their involvement. For example, Barclays’ Code Playground initiative is potentially a useful way to teach children how to code – a skill that is increasingly regarded as an important one for children to develop (although there are differing views on the extent to which coding itself should be prioritised). However, this option is only available to children who can visit a Barclays branch during a weekday with an adult and can provide a laptop. An option, therefore, not available to those without a computer at home or those whose circumstances prevent a visit to the bank on a weekday.
Initiatives such as the Code Playground could, of course, be delivered effectively by public libraries should they have the funding and staffing to make it happen. Indeed, with public libraries being far more accessible to the general public (and a lot more child-friendly) there is a real opportunity here for libraries to develop the digital skills of the next generation and help the UK lead the world in bringing through the next generation of coders. Delivering such an initiative that requires individuals to visit a branch and bring expensive equipment with them is perhaps not the most effective way of addressing the deeply entrenched digital skills divide.
The move to enlist Barclays and BT into the drive to tackle the digital skills gap emerged as an outcome of the Digital Inclusion Charter, where 38 signatories committed in December 2014 to reduce the number of people who are offline by 25% by 2016. The public library scheme will be run by Barclays Digital Eagles and BT’s Digital Friends. BT volunteers will be “working with trained Barclays staff – called Barclays Digital Eagles”, although it is difficult to determine who BT will employ as “Digital Friends” to deliver this initiative.
Furthermore, there is a lack of clarity regarding Barclays’ “Digital Eagles”: are they Barclays staff who have volunteered for these roles and been given extra training? Are they experts who were recruited specifically to provide this service in libraries? Or are they simply bank staff doing this as an additional duty? It is unclear from the information currently in the public domain how Barclays will deliver this service. What we do know is that of the 377 UK-wide vacancies available at Barclays in August 2015, none had the title “Digital Eagle”.
Problems presented by the BT/Barclays partnership
There are a multitude of problems presented by this tie-up between BT, Barclays and public libraries in England.
The encroachment of a commercial enterprise into a neutral public space such as public libraries is fundamentally at odds with the ethos of freely providing access to services for all.
The attempt by commercial enterprises to take over the roles of public servants: on what basis are volunteers working on behalf of a commercial body able to better provide the service than trained staff/volunteers working in public libraries?
How long is this funding going to last? It’s stated to be a two year project, but what happens when it ends? How will Barclays, BT and the government ensure that the development of digital skills continues after the project comes to a close?
Hardware – with Barclays’ Code Playground scheme (designed to help teach children to code), children have to bring their own laptop to the sessions. As this pairing of BT and Barclays seems to cover the internet connection (BT) and skilled support (Barclays), has there been any consideration regarding the provision of hardware? All three are required to effectively tackle a lack of digital skills; how will they ensure all three are available? Or is the scheme only accessible to those who can provide the equipment?
Staffing – are commercial enterprise staff going to be allowed to use a public, neutral space? What will be the checks and controls on suitability of Barclays staff to work with often vulnerable users, such as Disclosure verification? Can we be sure that the staff provided by Barclays/BT will adhere to the highest levels of trust and privacy, meeting the standards expected of professional librarians?
Will BT or Barclays be allowed to use this neutral public space to promote their own commercial enterprises? Will there be any requirement for them to be entirely neutral when dealing with issues in terms of communications and banking?
When will this service be available? Is it only during dedicated sessions, as with those Barclays currently hold in their branches? Or will it be available during library opening hours, whatever they may be? Will BT/Barclays staff be available on evenings and weekends when the library is open?
Confusion over availability – digital TV means viewers across the UK will be seeing adverts for this service, which is actually only going to be available in England and Wales. This creates unrealistic expectations in potential service users of the resources available to them in their location, which their local public library staff will have to deal with.
Before the commencement of such an initiative, clarity on these issues would be helpful, and should be communicated to the general public.
Comment from CILIP – the professional body for librarians
To date, CILIP have not made any official comment on the implications of this collaboration between BT and Barclays, restricting their references to the announcement to a single tweet linking to a story published on The Bookseller website on 19th March. They also tweeted a link to another Bookseller story about the official launch of the pilot scheme on 22nd July, but have not voiced any official concerns about this intrusion of commercial enterprises into a public space. Whilst there has been no comment to date, a representative from CILIP has attended all the meetings of the overseeing body, the Leadership for Libraries Taskforce, and has therefore been aware of the developments. It’s possible, of course, that all of the concerns raised above have been put forward by CILIP and factored in to the development of the project.
The implementation of the scheme
The launch of the trial scheme took place on 22nd July 2015. As most of the publicity was on Government websites and the sites of the companies involved, the launch seems to have gone somewhat under the radar, aided by the lack of commentary by the professional body.
The press release mentions 100 libraries and community centres being involved in the scheme. The initial reports stated the scheme would cover “57 libraries and 13 community centres across the country. A further 10 sites, including a care home, a charity home and a homeless centre will also be provided with free wi-fi” – a total of 80 sites. Details of the remaining twenty sites are not currently clear, which raises the question of what has happened to the involvement of the care home, charity home and homeless centre in the scheme. BT state that “more than 100 libraries and community centres” will deliver the project. The first Leadership for Libraries meeting indicates that the funding is for “80 libraries and 20 community centres in areas of social deprivation”, but in a later meeting the scheme is proposed to cover “100 sites including over 50 libraries”. Thirty libraries appear to have been dropped from the scheme, but there is no indication as to why.
Trying to locate specific detail about this scheme appears to be particularly difficult. How many libraries and other locations are actually involved in this scheme? Where can we find out which ones they are, and where they are? Why is there no consistency in the messages being published about this scheme? One of the risks of commercial enterprises being involved in public spaces and services is that the entire culture of a corporate body is focussed on protecting its own sensitive commercial secrets – a culture at odds with that of a public body accountable to the public. The result seems to be what we have here with the BT/Barclays tie-up: a project that is both difficult to verify and riddled with conflicting information.
In contrast to the above approach of inviting commercial enterprises to take possession of elements of a public space and services, an alternative project has also recently been launched in England by Arts Council England (ACE). As part of the drive to increase skills, ACE have announced the availability of £7.1 million in funding for public libraries in England to access, which will run for six months and help enable free wifi access across all public libraries in England. Confusingly though, that initiative is also a “key development” of the Leadership for Libraries Taskforce in parallel to the BT/Barclays project.
It would be helpful if BT, Barclays, and the Leadership for Libraries Taskforce address the issues raised above, and communicated with greater clarity about the nature of the scheme and how it will be delivered. Answers to the following questions would be particularly beneficial in terms of the roll-out of this scheme:
How many public libraries are involved in this initiative? Which specific ones are they?
What restrictions are there on the employees of commercial enterprises while in a neutral public space? Are they allowed to promote their products, or try and gain a commercial advantage by attempting to gain clients while positioned within public libraries?
Was any analysis done on the viability of asking commercial enterprises to donate funds to public libraries to allow public library staff to provide the services which those commercial enterprises now wish to provide in libraries, prior to BT and Barclays being given permission to place their own staff within those spaces?
What protections are in place for the vulnerable users of public libraries who make use of the resources provided by the BT/Barclay partnership? Both in terms of the checking of the commercial participants in this scheme, and ensuring that no inappropriate promotion of products is being undertaken.
Who is responsible for the security of the machines which participants will use for the initiative, e.g. ensuring that no malware is installed on the machines involved?
What is the long-term plan for supporting this approach to developing digital skills in the general public, once this project is completed?
The following article was contributed by Tim Turner, trainer & consultant on Data Protection, FOI, PECR and information rights.
“Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.”
Donald Rumsfeld’s comment on the fact that sometimes we don’t know what we don’t know is notorious for its lack of clarity, but it is a very helpful summary of most massive data protection or security incidents. Take the recent TalkTalk debacle, in which the telco’s website was hacked, and a quantity of personal data was accessed and presumably stolen. We don’t actually know much more than that: we don’t know how the hack happened, we don’t know for certain who committed the act, we don’t know how much data has been stolen and most importantly, we definitely don’t know whether any laws have been breached.
There is a lot to keep an eye on. TalkTalk’s hastily assembled FAQ was emphatic that the Data Protection Act had not been breached by this incident, and the company has generally been at pains to hashtag every tweet with #cyberattack, painting itself as the victim. Meanwhile the headlong rush of the company’s Chief Executive, Dido Harding, into every available TV studio has impressed some with her frank admission that TalkTalk could have done more to protect customer data, but thrown the ‘no breach’ claim into doubt.
Data Protection law is built on eight principles, and the seventh principle requires that organisations put in place "appropriate" levels of technical and organisational security. The fact that whoever hacked the TalkTalk website has committed a crime in doing so does not absolve TalkTalk of responsibility. The 7th principle explicitly requires measures to prevent unauthorised and unlawful processing of personal data, so anyone whose website might be the gateway to personal data has to have proactive protections to repel a hacker. Several companies have already fallen foul of the 7th principle and received substantial monetary penalties after falling victim to hackers, including Sony (over the PlayStation Network breach), the British Pregnancy Advisory Service and the travel company Think W3. In each case, a criminally-motivated hacker was assisted by inadequate security and lack of testing.
All sorts of considerations can increase the burden of security. If an organisation is large and high-profile, if it holds a large amount of personal data, or if a hack might expose sensitive data that could lead to harm, the measures must be progressively more robust. All three of these factors apply to TalkTalk. Harding has claimed that TalkTalk's security was "head and shoulders" above that of its competitors, and if that can be proved, TalkTalk are off the hook. But with a Chief Executive who has already admitted that their security might have been found wanting, and the arrest of a 15-year-old boy in connection with the hack (putting paid to some of the more lurid theories about some kind of Russian / ISIS / Cyber-Jihadi / SPECTRE agent being the perpetrator), presumably we know for certain that the Information Commissioner will act swiftly and decisively to enforce the law?
Well, not quite. Data Protection does not allow for summary justice. The Information Commissioner needs to prove at least on the balance of probabilities that there were appropriate measures to prevent hacking that TalkTalk should have had in place but didn’t. TalkTalk will have to be able to make their case, and the ICO will have to listen. The DP framework allows for the possibility that TalkTalk can be hacked and yet no breach has occurred – the breach is not the incident, but the absence of measures to prevent it.
The omens are nevertheless not auspicious. As well as Harding's unwise comments, TalkTalk's track record is troubling. In 2008, the company received an enforcement notice from the ICO, requiring it to stop such basic errors as customers being able to see each other's records online. Much more recently, TalkTalk's security was audited by the ICO, and in a break with normal practice, TalkTalk refused consent for the executive summary to be published (despite other organisations allowing quite negative summaries to go online).
The most important thing that we do know is that the TalkTalk hack does not just put the company in the frame. The Information Commissioner is better at enforcing on security matters than nearly any other aspect of Data Protection, but their appetite for taking on large organisations is inconsistent: there may be £250,000 penalties for Sony, but until recently, only unenforceable undertakings on a largely unrepentant Google. Many activists can recall big Data Protection scandals like press misuse of private data (which the ICO discovered but did not tackle) or secret trials of the Phorm internet tracking software (which some suspect went unpunished because the trials were carried out by BT). If the ICO fails to act, it will need an extremely persuasive justification to calm the outrage that will likely follow, and we simply don't know if such an explanation exists, whatever the law says.
A recent ruling by the European Court of Human Rights (ECHR) could have ramifications for all of those with websites enabling comments to be posted by readers. The Court ruled that an Estonian news site (Delfi) may be held responsible for anonymous comments that are allegedly defamatory. A representative of digital rights organisation Access argued that the judgement has:
“…dramatically shifted the internet away from the free expression and privacy protections that created the internet as we know it.”
A post by the Media Legal Defence Initiative listed the main reasons why the court came to this decision, which included:
the “extreme” nature of the comments which the court considered to amount to hate speech
the fact that they were published on a professionally-run and commercial news website
the insufficient measures taken by Delfi to weed out the comments in question and the low likelihood of a prosecution of the users who posted the comments.
The timing of this is particularly relevant for me following the coverage of a tragic local incident. Following an attempted suicide by a local woman that led to the death of a man attempting to rescue her, a local news website reported the incident in relative detail, including statements from witnesses (although withholding, at the time, the names of the individuals involved). Sadly this led to a number of insensitive and inappropriate comments being posted about the woman who tried to take her own life. Upon approaching the publishers to request the closing of comments for such a story, I was told that I should report individual inappropriate comments rather than expect them to remove the comments thread altogether.
These two stories raise a number of interesting issues. Who is ultimately liable for content that is published online? Is it the responsibility of the host website to deal with “extreme comments”? Is it the responsibility of the individual who posts the comments? Should there even be any restrictions on what people post online? Should we just accept that everyone has a right to free expression online and that hurtful comments are just manifestations of free expression?
What is your view?
If you’ve got a perspective on the judgement by the ECHR, who should ultimately be responsible for comments posted online or whether any limits in this area are an unreasonable limitation of free expression and would like to write about the issues for Informed, we’d like to hear from you. Articles should be 800-1000 words (although this is flexible) and our normal moderation process applies. If you are interested in writing for Informed, please contact us via submissions[at]theinformed.org.uk.
If you require any support, The Samaritans are available 24 hours a day, 365 days a year, to provide support.
Net neutrality is the principle that all packets of data on the internet should be transmitted equally, without discrimination. So, for example, net neutrality ensures that your blog can be accessed just as quickly as, say, the BBC website. Essentially, it prevents ISPs from discriminating between sites and organisations, whereby those with the deepest pockets could pay to get into the fast lane whilst the rest have to contend with the slow lane. Instead, every website is treated equally, preventing the big names from delivering their data faster than a small independent online service, and enabling a fair and open playing field that encourages innovation and diversity in the range of information available online. The principles of net neutrality are effectively the reason why we have a (reasonably) diverse online space that enables anyone to create a website and reach a large number of people.
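The difference between neutral treatment and paid prioritisation can be sketched with a toy model. This is purely illustrative (real routers and traffic-shaping systems are far more complex), and all site names are made up:

```python
import heapq

def neutral_delivery(packets):
    """FIFO: packets leave in the order they arrived, regardless of source."""
    return [site for site, _ in packets]

def prioritised_delivery(packets, fast_lane):
    """Packets from 'fast lane' sites are always forwarded before the rest."""
    heap = []
    for i, (site, size) in enumerate(packets):
        priority = 0 if site in fast_lane else 1  # paid sites jump the queue
        heapq.heappush(heap, (priority, i, site))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# Three packets arrive at an ISP in this order:
queue = [("small-blog.example", 1), ("bigco.example", 1), ("small-blog.example", 1)]

print(neutral_delivery(queue))
# arrival order preserved: small blog, big company, small blog
print(prioritised_delivery(queue, fast_lane={"bigco.example"}))
# the paying site's packet is forwarded first; the small blog waits
```

Under neutrality the small blog's packets are never pushed to the back simply because it has not paid; once prioritisation is allowed, the deep-pocketed site is always served first.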
Why should we in Europe be concerned if this is a US issue?
Whilst there has been little public debate in the UK or Europe around the issue of net neutrality, it is becoming an increasingly important issue. Earlier this year, the Latvian government (currently holding the European presidency) proposed that there should be exceptions to net neutrality rules, particularly when their networks face “exceptional…congestion”.
In March, a majority of EU Member States voted in favour of changing the rules to bar discrimination in internet access but, crucially, the rule changes would allow the prioritisation of some “specialised” services that required high quality internet access to function. This was reinforced by the Chief Executive of Nokia who argued that some technologies (such as self-driving cars) will be hindered so long as providers have to abide by net neutrality principles.
A recent report by Web Index found a mixed bag when it comes to net neutrality regulations across the EU. The report noted that whilst the Netherlands scored eight out of a possible ten for net neutrality, countries such as Italy and Poland scored only two. In a blog post for the European Commission, Tim Berners-Lee argued that binding net neutrality rules would "raise the bar for the performance of lower ranking countries, ultimately enabling Europe to harvest the full potential of the open Internet as a driver for economic growth and social progress".
Will regulation solve the problem?
Whilst tighter regulation can help to oblige telecoms companies to adhere to the principles of net neutrality, that does not mean the problem will be eliminated. As with all laws, their existence does not eradicate an issue; it merely minimises it. For example, the Authority for Consumers and Markets in the Netherlands recently fined the country's two largest operators: KPN for blocking services, and Vodafone for zero-rating data for subscribers to HBO. It is clear that violations will continue to occur, but arguably there will be fewer once regulation is in place.
Google have been largely quiet publicly when it comes to the net neutrality debate in recent years, although they had previously been very vocal on the issue and have lobbied the FCC in the past.
Why should I care about net neutrality?
Net neutrality ensures that we have an internet that enables the broadest possible range of views. By ensuring a level playing field, it ensures that no one perspective dominates the internet. If companies are able to ensure their data travels in the fast lane, then we can be sure that those companies will dominate the landscape because their sites transfer data quickly and efficiently. This will ultimately lead to a narrowing down of sites as people avoid services whose data travels in the slow lane in favour of those in the fast lane. Big companies will get bigger, small companies will disappear and new companies will not get off the ground without significant sums of money to enable them to compete. The internet thrives on innovation, and an abandonment of these principles would seriously impede it.
We have also seen in other forms of media what occurs when regulation is too lax. We see in print and broadcast media a decline in media plurality, with certain outlets coming to dominate the landscape through ownership of popular print and broadcast media. An abandonment of net neutrality rules could lead to the very same decline online: an internet dominated by a very few large corporations providing the vast majority of the content. This is, of course, bad news for those who use the internet, and bad news for democracy, as a vibrant democracy relies on media plurality to ensure a well-informed electorate.
This awkward cliché, repeated at the end of every BBC news report, signals a crude shift in gear. It seems that ‘The News’ has two parts: ‘the news where we are’ (London-centred politics, war, economics, English premiership football); and ‘the news where you are’ (local and parochial oddities that may entertain the yeomanry but which won’t deflect the ship of state from its mighty progress). Ruthlessly and deservedly lampooned during last year’s independence debate, the phrase came to mind last week as Vint Cerf shared his fears on the evanescence of digital memory and the need to take collective action to counter the pernicious and ubiquitous impact of obsolescence. Reported by the BBC, the Independent, the Guardian and others (mostly from San Jose CA) it would seem that a digital black hole is set to initiate a digital dark age sometime soon. There’s a choice of metaphors but none of them good.
First things first: I don't have a copy of Vint Cerf's original remarks, so my observations are really only about the reportage. In fact almost anything he might choose to say would have been welcome. It's undoubtedly true that preserving digital content through technological change is a real and sometimes daunting challenge. Our generation has invested as never before in digital content, and it is frankly horrifying when you consider what rapid changes in technology could do to that investment. Vint, as one of the architects of the modern world, is exceptionally well placed to help us raise the issue among the engineers and technologists who need to understand the problem.
We do desperately need to raise awareness about the challenge of digital preservation so that solutions can be found and implemented. Politicians and decision makers are consistently under-informed or unaware of the problem. In fact awareness raising was one of the reasons that the DPC was founded. Since 2002 DPC has been at the forefront of joint activity on the topic in the UK and Ireland, supporting specialist training, helping to develop practical solutions, promoting good practice and building relationships. A parliamentarian recently asked me which department of government will be best supported by all this work (presumably in an attempt to decide which budget should pay for it). I answered ‘all of them’. I am not sure if the question or the answer was more naïve: it’s hard to imagine an area of public and private life that isn’t improved by having the right data available in the right format to the right people at the right time; or conversely frustrated by its absence. Digital preservation is a concern for everyone.
But that’s not the same as saying that a digital black hole is imminent. It might have been in 2002 but since then there’s been rather a lot to celebrate in the collective actions of the digital preservation community globally (and especially here in the UK and Ireland) where agencies and individuals are beginning to wake up to the problem in large numbers. These days we’re seeing real interest from across the spectrum of industry and commerce. Put simply the market is ripe for large scale solutions. It’s easy to focus on the issue of loss, but we can also talk confidently now about the creative potential of digital content over an extended lifecycle.
In January this year the DPC welcomed its 50th organisational member: the Bank of England. It's a household name, but it is not a memory institution with a core mission to preserve. Other new members in the last year include HSBC, NATO and the Royal Institute of British Architects. They all depend on data and they all need to ensure the integrity of their processes. Any organisation that depends on data beyond the short lifespans of current technology – and we're all data-driven decision makers now – needs to put digital preservation on its agenda.
If the last decade has taught us anything, it's that we face a social and cultural challenge as well as a technical one. We certainly need better tools, smarter processes and enhanced capacity, which is ultimately what Vint's suggestion of Digital Vellum is about (though others dispute the detail of his proposal). But this won't solve the problem alone. We also need competent and responsive workforces ready to address the challenges of digital preservation. Time and again, surveys of the digital preservation community show that the skills are lacking, and where they exist they are themselves subject to rapid obsolescence. We know that digital skills are in critically short supply in the UK economy: at the same time as Vint was arguing for Digital Vellum, the Chief Constable of Police Scotland had to apologise for having misled parliament because statistics about draconian stop-and-search powers were inadvertently deleted. The nation's most senior policeman could lose his job because his organisation lacked digital preservation skills. Arguably the lack of skills is a bigger challenge than obsolescence.
Moreover a political and institutional climate responsive to the need for digital preservation would allow us to make sense of the peculiarities of copyright. Those who argue for the right to be forgotten ingenuously assume an infrastructure where you will be remembered: a somewhat populist rush for data protection and cybersecurity is tending to stifle reasonable calls for data retention. This is pretty raw stuff. At the same time as the technology commentators were worrying about technical obsolescence a senior politician was caught deleting content of his own containing comments that now seem ill-judged. The machinations of those who want us to forget might well be a bigger threat to our collected memories than digital obsolescence.
San Jose is lovely in early spring. But there’s a better story about digital preservation where we are.
Do you have something to say on a current issue facing the information world? We’re always looking for new contributions to Informed from the information professional community. If you would like to write something for the site, do drop us a line!
The following post was contributed by Informed team members Jennie Findlay and Ian Clark.
There has been much coverage of the emergence of MOOCs (Massive Open Online Courses) in recent months, sparking multiple discussions about their usefulness as a new learning experience for a wide variety of users. Their popularity has continued to rise since the first MOOC was launched to the public in 2007, so much so that even high street retailers such as Marks & Spencer have joined in, using the MOOC platform in conjunction with an academic partner in order to deliver a course on "commercial innovation" (a growing trend as MOOC providers begin to focus on providing job-related training). Some MOOC providers are also now beginning to offer "nanodegrees", designed to train individuals for very specific jobs. Within a few short years, online searches for learning providers with a physical location have been outstripped by those for online courses.
Of course MOOCs can be excellent learning tools, but as with any other method of delivering information and education, they also have their limitations. Most (free) MOOCs have excellent signup rates, but also an incredibly small course completion rate (averaging only 4% in one study). Those people who successfully participate in MOOCs are also those who are most likely to already have an advanced level of education. But are current MOOC offerings just an academic toy for those who are already well educated, bypassing those who are actually most in need of access to expert training and life-enhancing skills? What's stopping those who could most benefit from gaining skills and education via a MOOC from embracing the opportunity of self-education?
Access barriers to MOOC use
There are multiple reasons why those who would most benefit from being able to access the university level training provided by MOOCs are unable to do so.
Access to a reliable internet connection
Access to an internet connection is an essential requirement for involvement in a MOOC, which is, by definition, delivered entirely online. But for many of those people who would most benefit from such a course, those with lower skill and education levels for example, securing access to a reliable internet connection at an appropriate time can present a significant barrier to engagement. According to the Office for National Statistics' Internet Access statistical bulletin, 16% of households in the UK do not have an internet connection. Of those households without internet access, 12% say they do not have access because the equipment is too expensive and 11% say the access costs are too high. Furthermore, in households where the income is below £12,500, only 58% use the internet (lower than the figure for middle income households in 2005). It is clear, therefore, that for lower income households, MOOCs do little to broaden access to education and break down existing barriers.
Ownership of a computer/laptop with which to undertake a MOOC
A core requirement of an interactive course is that you have access to the equipment which will enable you to interact with fellow students and your tutor. However, the cost of owning a computer to enable you to undertake the course can be prohibitive for many, which means that their only option to access the course is via their local public library, and the computers available there.
Accessibility of public libraries
To use a public computer for a course of study requires that there be reliable access to that computer for the user. With reduced opening hours in many public libraries, not to mention library closures, being able to find a library open during the times when a MOOC student can visit presents a further significant barrier.
Availability of public computers
Undertaking a course of study, particularly while also working or undertaking other full time duties, requires the ability to set aside specific times for studying which fit around the student's schedule. A lack of reliable availability of a computer will have an impact on this essential requirement to plan times of study. Many public libraries have restrictions on the availability of their computers, including limiting user sessions to one or two hours at a time, restricting the number of hours per day a user can have on a computer and, in some cases, charging users for access to the internet. This can make it impossible for MOOC students who rely on access to these computers to schedule their studying time properly.
Reliability and speed of library networks
If a user has managed both to access a public library and to secure a public computer, they may still encounter difficulties engaging with a MOOC. Ageing technology and limited bandwidth on library networks mean that those who rely on publicly accessible computers may experience greater difficulties than those who do not.
Course online interaction requirements
Many MOOCs encourage or require scheduled interaction sessions with either other participants or the tutor, often in Google Hangouts or MOOC-based chat rooms. This requirement to be online, and to access certain tools, can be difficult to comply with, particularly if the student has problems guaranteeing their ability to be online at a specific time. Many of the internationally based MOOC providers schedule these events in the evenings or at weekends, which are particularly difficult times for some students (e.g. those with families) to get online.
Amount of time needed to commit to completion
There is a need to dedicate substantial time to many of the courses available online. Most Coursera courses, for example, have an estimated workload of 5–15 hours per week. Regardless of the course's flexibility in terms of deadlines, for some the amount of time required to complete the course is too much. For those on low incomes, the combination of balancing the requirements of family and personal development means that the latter will always lose out to the former. In addition, missing one three-hour class in one week due to other responsibilities will mean that six hours are needed the following week in order to catch up. This becomes an increasingly difficult task if internet and computer access are not guaranteed.
Cost of undertaking some of the commercial MOOCs
The most useful MOOCs are those which provide accredited training, and which will therefore be accepted and respected by potential employers. Although many MOOCs are currently being run free of charge to participants, it does not mean that they will be provided in this way in perpetuity. Currently, the substantial costs of creating and hosting MOOCs are being absorbed by the providers or course creators, but it is unclear to what extent this is sustainable in the long term. Most MOOC providing bodies are commercial entities, and inevitably they will eventually want to create a return on their investment.
Increasing introduction of costs to use public library networks (first hour free or sliding scale of charges for use of equipment)
As mentioned above, certain libraries have begun introducing charges for the use of their computers, usually after an initial free session time. Manchester City Libraries, for example, allow free use of library computers for an hour; after that hour, users are charged a fee of £1.50 per hour. Having to pay for the use of a public computer can be a significant barrier for lower income MOOC students. And this is before we consider the cost of printing out documents, which comes at a price in public libraries. Many MOOC students will need to print out a substantial volume of the course materials in order to consult them when offline, which could significantly increase the financial burden.
The MOOC effect…
Beyond costs and barriers, MOOCs do not seem to be the giant step forward for the open, broad-based education revolution that their advocates claim. For example, 70% of those who embark on such a course already have a degree; MOOCs are not attracting a huge swathe of people beyond the usual groups who engage with higher education. Even then, it's questionable whether MOOCs are working for the majority, with completion rates usually below 10%.
There are also concerns about the quality of the education provided via MOOCs. One leading digital innovator in academia, Professor Dan Cohen (who led the development of Zotero), argues:
“We’re trying to do much more than reproducing lectures and quizzes online; we are trying to use the medium to enable new kinds of interpretation and scholarly interaction. So MOOCs seem like a huge step backward.”
Cohen has also claimed that he and other innovators are concerned about what he calls the “lowest-common denominator/old-style learning by repetition aspect to them”. Cohen argues, essentially, that MOOCs take a rather old-fashioned approach to education and that instead of promoting MOOCs as an alternative we should develop digital projects that help students to explore and encourage them to build their own digital projects.
There is also the danger, of course, of a narrowing down of course providers. As is inevitable, providers will merge, take over competitors or disappear (particularly as some struggle to generate a return on their investment). In such an environment, there is a very real danger of the range of providers declining and the quality of the courses suffering as a result. A move towards one leading player in the market could create serious problems from an educational perspective, particularly if that player has other commercial interests and sees MOOCs as a way to cross-promote. Equally, there is a danger of MOOCs developing very narrow skills that benefit either the provider itself or its partners, rather than a well-rounded education that encourages the kind of critical thinking skills that are not considered desirable or profitable within the workplace.
Cohen also points out that most of the successful MOOCs have been maths- or computer-based and primarily vocational. It may well be that MOOCs are a beneficial educational tool, but perhaps not across all subjects. Some subjects may lend themselves to the learning styles that MOOCs demand whilst others may not. After all, everyone learns in a different way. Some prefer face-to-face tuition, some prefer textual learning, some are happy with videos. For those who perform best with face-to-face interaction (whether with peers or teachers), MOOCs will not be a suitable alternative to traditional methods of learning. A mixed approach for such students, however, may be more suitable. San Jose State University, for example, found that a combination of online lectures and face-to-face class time significantly improved the pass rate for engineers.
MOOCs have certainly got a lot of people talking excitedly about their potential to revolutionise education. However, it is not yet clear whether they offer any significant advantages over formal routes of education, or that they are quite the revolution that their advocates suggest. There are still a number of barriers that need to be overcome before many can embark on a MOOC; in this respect they differ little from more traditional methods of learning. Higher education has long been seen as the preserve of the few, particularly at the elite institutions. There's little to suggest that MOOCs are any different in this regard.
Indeed, it appears that they erect the same barriers as their traditional counterparts. Cost is a big factor in preventing engagement, as is time. Neither is in abundance for those at the bottom of the economic scale. For those with limited resources (both financial and time), MOOCs may appear as distant as a top university. They are not, as yet at least, proving to be the big game-changer for further education that their advocates may have suggested.
Not only are MOOCs failing to open the doors of education to all, but they are also failing to be revolutionary in how they teach. Rather than taking full advantage of the technology that such a programme should allow, they take a rather conservative approach. As Cohen points out, many universities are already providing more sophisticated methods for engaging students digitally. MOOCs, at present at least, seem to be somewhat behind the curve when it comes to engaging with students in new and innovative ways.
MOOCs certainly appear to be here to stay, but are they really the big step forward that we have been led to believe? There are still barriers to their use, as with more traditional routes of education. They are not accessible to those without the means to engage with them, either financially or in terms of the time they can commit. They seem to offer nothing new in terms of digital learning; in fact, they seem some way behind traditional universities in terms of innovation. MOOCs are certainly an interesting development in the delivery of education. It remains to be seen whether they herald a revolution in terms of opening up education and fully exploiting new technologies in the learning environment. In short, the jury is still out.
Only a subset of the internet – or no internet at all – is ever accessible to any individual. We are never using the Internet, if that even exists. This is due to a variety of positive and negative mechanisms which include the state, the law, the self, whether you actually have internet access at all, internet service providers, friends, teachers, financial situation, cultural reasons, and your mum.
It might come as a surprise to learn that universities and other higher education institutions throughout the UK choose to block categories of the internet beyond what is required of them by law, from sex and abortion, to naturism, online greeting cards, and marijuana. This is often referred to as “content-filtering” by the companies who perform the blocking, since this sounds less bad.
As information professionals working in the libraries of these institutions, should we care that we are working in an environment which automatically excludes whole categories of the internet? Why does a university pay money to do this, and who decides which categories to block and why?
There are of course parts of the internet which are blocked before the university steps in. The Internet Watch Foundation (IWF) maintains a constantly changing list called the Child Abuse Image Content list (CAIC). Companies which give us access to the internet subscribe to this list and block those parts of the web. There are also websites blocked by order of a court. These are usually file sharing sites where major infringement of intellectual property occurs. Try accessing: http://www.thepiratebay.se.
In addition to that which is legally required, many universities license third-party content-filtering software such as BrightCloud, Websense, Smoothwall, Bloxx, and Fortiguard. In response to a request for a webpage, the software will either allow or block access depending on which categories the university has selected (and, in some universities, on the profile of the individual requesting it).
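The allow/block decision described above can be sketched in a few lines. This is a minimal illustration only: the hostnames, categories, and lookup table below are all hypothetical, and real products such as those named above use proprietary, constantly updated classification databases rather than a static dictionary.

```python
# Sketch of category-based content filtering (hypothetical data throughout).
from urllib.parse import urlparse

# Imaginary vendor-assigned categories for a handful of hostnames.
CATEGORY_DB = {
    "example-naturism.org": "naturism",
    "example-games.com": "games",
    "example-news.co.uk": "news",
}

# Categories this (imaginary) university has chosen to block.
BLOCKED_CATEGORIES = {"naturism", "marijuana", "questionable"}

def filter_request(url: str) -> str:
    """Return 'ALLOW' or 'BLOCK' for a requested URL."""
    host = urlparse(url).hostname or ""
    category = CATEGORY_DB.get(host, "uncategorised")
    return "BLOCK" if category in BLOCKED_CATEGORIES else "ALLOW"
```

Note that everything turns on the vendor's classification of the hostname: an uncategorised or misclassified site is waved through or blocked with no further judgement, which is part of why the vague categories discussed below matter.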
So what categories are universities choosing to block? Under the Freedom of Information Act, I contacted universities to find out whether any blocking occurred on their networks and, if so, which categories they blocked. Where universities claimed an exemption from disclosing a list of URLs due to perceived security implications, subsequent requests were made to ascertain the "categories" by which websites were blocked (e.g. pornography).
Here is the good news: of the 119 higher education institutions I received a response from, 63% confirmed they did not carry out internet blocking. Indeed, some institutions, such as Imperial College, pointed out that blocking parts of the internet would be against the principles of academic freedom.
Here is some of the bad news (a full list of responses is available on figshare):
10% refused to confirm or deny whether they blocked parts of the internet.
Trinity Laban Conservatoire of Music and Dance blocks the category “abortion” for junior users.
In addition to “adult”, Queen’s University Belfast also blocks “naturism”.
University of Aberdeen and Nottingham Trent University block “marijuana”.
There are a whole host of vague categories such as “questionable”, “tasteless”, “extreme politics”, “violence”, “unethical”, and “intolerance”.
Universities which carry out the category-based blocking described above are keen to point out that they have mechanisms in place by which an individual can request that a block is lifted. However, this can involve seeking permission from the head of department, or submitting an evidence form justifying your need to access that material: processes which will never be immediate and could be humiliating. Should an adult have to get permission to access porn? Is the number of adults in UK universities getting off on porn on library computers, in full view of everyone else, really endemic enough to warrant this? And what about a 15-year-old looking up abortion?
The 10% which refused to provide any information at all generally did so by claiming an exemption under section 31(1)(a) of the Act, which permits public bodies to withhold information in the interests of the prevention and detection of crime. My only comment would be how surprising it is that the 63% of universities which answered openly apparently saw no such risk.
Universities and these content-filtering companies cannot or will not release very detailed information on these categories, since doing so would give the individuals or organisations behind those URLs the means to attempt to circumvent their designated classification. We therefore don't really know much about how companies decide which webpages are "unethical" or "questionable".
Universities and their libraries are about creating, disseminating, questioning, and archiving information. The biggest possible subset of the internet out there in the wild should not be reduced any further by universities according to an arbitrary set of "undesirable" categories, but offered alongside digital literacy skills which empower students to judge information for themselves, rather than making judgements on their behalf.
Some universities freely volunteered the name of their content-filtering software. Some disclosed this information when requested. Others specifically refused, citing "commercial" interests. The content-filtering companies listed here have all been mentioned by at least one university.
Where a university responded by stating that it only blocked malware/spam sites, this was counted as a "no blocking" response.