WeeklyWorker

29.10.2015

State and internet

The hacking of TalkTalk has reminded us of the risks and compromises associated with the web, writes Yassamine Mather

On Thursday October 22 a hacker managed to bring down the website of the mobile/internet provider TalkTalk, and the company began informing its four million customers that some of their personal details (including bank account and credit card details) may have been accessed. A criminal investigation was launched and, according to the company’s CEO, a ransom was demanded. It is claimed that this was the work of a 15-year-old from Northern Ireland.

So how did the attack happen? And did the company fail to detect the severity of the initial attack on its website on October 21? By all accounts its website came under a DDOS (distributed denial of service) attack on that day. DDOS attacks happen when the bandwidth of a webserver or a public-facing application server is flooded by requests from a large number of servers or individual machines. Commercial webservers use a variety of technologies and scripts (computer code), all constantly responding to customers’ and viewers’ online requests. When a DDOS attack takes place, the traffic generated by the volume of hostile requests creates such a bottleneck that legitimate customers and viewers are denied service. In other words, the website or application server is so busy that its resources, such as memory and bandwidth, cannot cope with legitimate requests and the site appears to be down.
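To make that bottleneck concrete, here is a minimal, purely illustrative sketch of the kind of per-source request counting a webserver might use to spot a flood (nothing here describes TalkTalk’s actual setup, and the window and threshold values are invented for the example): once one source exceeds its budget within a time window, its requests are refused so that capacity remains for legitimate visitors.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10    # length of the sliding window (illustrative value)
MAX_REQUESTS = 100     # requests allowed per source in that window (illustrative)

recent = defaultdict(deque)   # recent request timestamps, kept per client IP

def allow_request(client_ip, now=None):
    """Return True if the request should be served, False if throttled."""
    now = time.time() if now is None else now
    q = recent[client_ip]
    while q and now - q[0] > WINDOW_SECONDS:   # drop timestamps outside the window
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False           # this source is flooding: refuse it
    q.append(now)
    return True                # normal traffic: serve the page

# One zombie machine hammering the server soon exhausts its budget ...
assert all(allow_request("203.0.113.5", now=t * 0.01) for t in range(100))
assert allow_request("203.0.113.5", now=1.0) is False
# ... while an ordinary visitor is still served.
assert allow_request("198.51.100.7", now=1.0) is True
```

The ‘distributed’ in DDOS is precisely what defeats a simple defence like this: when the flood arrives from thousands of different zombie addresses at once, no single source ever exceeds its budget, which is why such attacks are so hard to filter out.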

This is different from other forms of computer intrusion, where the attacker hides their activities in order to collect sensitive information: for example, snooping software used to establish long-term access to a database or to scientific/financial computer code. DDOS attacks are highly visible and so are a method favoured by cyberactivists, who often want to advertise their presence.

Originally, experts contacted by the Financial Times claimed the DDOS attack was “a distraction to enact a more specific data breach”. According to Rik Ferguson of Trend Micro, quoted in the FT, “It’s like setting a fire in the front yard, while coming in at the back door”.1 Apparently, such hackers are not after publicity: rather they want to access data held in what should be a secure database, where all the data should be encrypted (unreadable unless you have the key to the encryption) and where strict firewall rules apply, so that only servers or APIs (application program interfaces) with solid authentication and secure, non-public-facing internet protocol (IP) addresses can access the database.
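As an illustration of what ‘encrypted’ means here, the hedged sketch below uses the Fernet scheme from the widely used Python cryptography package: without the key, the stored token is unreadable, which is exactly the protection a breached database is supposed to retain. The sample account value is invented for the example.

```python
from cryptography.fernet import Fernet

# In practice the key lives in a key-management system, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# What the application handles for a moment ...
account_details = b"12345678 / sort code 09-01-26"

# ... and what should actually sit in the database: an opaque token.
stored_token = cipher.encrypt(account_details)
print(stored_token)                      # useless to an attacker without the key

# Only a holder of the key can recover the original value.
print(cipher.decrypt(stored_token))      # b'12345678 / sort code 09-01-26'
```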

So how did this one individual bring down the website of a major mobile/internet provider? He, like most hackers, would not have owned infrastructure big enough to do so, and most probably used a ‘botnet’, made up of zombie computers infected by Trojan malware, allowing these compromised systems to be controlled remotely by the attacker. This collection of zombies is then used to generate the high traffic flow necessary to mount a DDOS attack.

In comparison with company espionage or state hacking, this is an insignificant attack. However, the fact that it threatened the personal accounts of four million customers meant the media paid far more attention to this incident than to Stuxnet or other state-on-state hacking.

Before dealing with those incidents, let us review how a website or app works from the user’s point of view. We all use browsers - and, increasingly, mobile and tablet applications - where we enter details of credit or debit cards, bank accounts and so on. This data should not be kept by the browser, the app server or the API: it should travel through the server to the provider’s main database.

In accordance with the Payment Card Industry Data Security Standard (PCI DSS), most financial companies, banks and insurance companies do not hold this data for longer than a few seconds, passing it with authentication to a safer data server or vault, which sits in a private network or intranet (a network whose IPs are not broadcast in the way those of web and app servers are) and is protected behind an internet firewall, which is supposed to stop suspicious and illegitimate traffic.

Typically the webserver might keep some record (logs) of the transaction beyond the few seconds for which it is passing the data on. However, any data kept in such logs must comply with both PCI and data protection regulations: for example, masking 12 of the 16 digits of a credit card number entered by a customer; any other information should be encrypted. Instead of keeping a full address, only some characters should be kept for monitoring and reconciliation of data between various sources.
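A minimal sketch of the kind of masking just described - the exact rules vary between implementations, and the function name and sample card number here are invented - keeping only the last four digits before anything is written to a log:

```python
def mask_pan(pan, keep=4, mask_char="*"):
    """Mask a card number (PAN), keeping only its final `keep` digits."""
    digits = [c for c in pan if c.isdigit()]
    masked = [mask_char] * (len(digits) - keep) + digits[-keep:]
    out, i = [], 0
    for c in pan:                      # re-insert the original spacing
        out.append(masked[i] if c.isdigit() else c)
        i += c.isdigit()
    return "".join(out)

print(mask_pan("4929 1234 5678 9012"))   # '**** **** **** 9012'
```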

According to the same Financial Times article, it is still unclear whether TalkTalk’s database had been encrypted and what protections were put in place.

Internet of things

The potential for abuse of the data held on computers and servers is endless. Yet capital is under constant pressure to keep personal information about us - advertising and marketing have entered a new era of personalised campaigns, directed at patterns of behaviour: recent purchases and so on. In the last few years we have seen how information regarding past online purchases or enquiries pops up on unrelated web pages when we are sending email, viewing social media or even reading the latest news.

Although some of this information is out of date and irrelevant (if I booked a flight to New York last week, it is unlikely I want to buy a similar flight today), the fact that servers from one company passed this information to Google means my travel plans have been shared. Of course, most companies are now looking at a more intelligent use of this data, so that instead of sending me irrelevant adverts for the same flight, a profile of my behaviour is kept - how many journeys I make, what clothes I buy online, who my friends are on social media, what my social status is. This allows company X to send me slightly more relevant adverts.

However, there are many ways this data can be abused - not just by companies and criminals, but also by the state. Those who follow news about the ‘internet of things’ tell us that it will manage the intersection of data gathering and data usage. Billions of sensors, apps, mobile devices and computers will gather information, although this data is not worth very much unless there is an infrastructure in place to analyse it in real time (instantaneously).

This is particularly significant when you take into account the number of web users. The year 2008 was significant because it was then that the number of connected devices overtook the number of people on our planet, which was approaching seven billion. In 2015 that figure is 7.6 billion items, with the most popular projection for 2020 being 50 billion connected devices (and each device could have many intelligent, internet-connected parts/sensors).

The ‘internet of things’ revolves around constant machine-to-machine communication, in real time. It uses cloud computing, networks and virtual networks of data-gathering sensors; it is mobile and relies on instantaneous connections; and, they say, it is going to make everything ‘smart’. Smart heating, smart ports, smart cities ... Of course, it becomes a more serious issue if, instead of a phone company losing our personal details, a computer network controlling a smart bridge - one supposed to identify its own structural faults - is hacked, putting lives at risk.

The concept of a ‘smart bridge’ was prompted by the collapse of a structure in Minnesota in 2007, which killed many people because the steel plates used were inadequate to handle the bridge’s load. In response, ‘smart cement’ - equipped with sensors to monitor stresses, cracks and warping - became a real project for use in the structure of bridges. These sensors connect to a computer and the internet connects the computer to the world. The same technology can be used on railways, roads, buildings ... indeed any structure.

If there is ice on the bridge or road, the same sensors in the concrete will detect it and communicate the information via the wireless internet to a satellite, which then informs the GPS (global positioning system) device in your car. Once your car knows there is a hazard ahead, it can slow down (if it is auto-piloted) or inform the driver to do so. So the internet is not just about the exchange of information: increasingly it is playing a significant role in every aspect of our lives. Rather than just browsers and apps, sensor-to-machine and machine-to-machine communication will become part of daily life, involving the conversion of information into action. Sensors on the bridge connect to machines in the car: we turn information into action.
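The chain from sensor to action can be pictured with a toy example. Nothing below corresponds to a real bridge or vehicle system - the threshold, identifiers and message format are invented - but it shows the ‘information into action’ loop described above: a reading becomes a structured message, and the receiving machine turns that message into an action.

```python
import json

ICE_RISK_TEMP_C = 1.0    # illustrative threshold, not an engineering value

def bridge_sensor_reading(surface_temp_c, humidity_pct):
    """Turn a raw sensor reading into a hazard message the network can carry."""
    icy = surface_temp_c <= ICE_RISK_TEMP_C and humidity_pct >= 80
    return json.dumps({"bridge_id": "A1-042", "hazard": "ice" if icy else None})

def car_receives(message):
    """The receiving machine converts the message into an action."""
    data = json.loads(message)
    if data["hazard"] == "ice":
        return "reduce speed"    # the autopilot slows down, or the driver is warned
    return "maintain speed"

msg = bridge_sensor_reading(surface_temp_c=-2.0, humidity_pct=90)
print(car_receives(msg))         # 'reduce speed'
```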

When we hand over data to a company, most people realise there is a value exchange involved. The company gets to learn more about us for possible future sales, but in return they are able to deliver our shopping, let us know if there is a problem with delivery or keep us up to date with promotions and special offers. We are constantly told this is to our advantage and our data is safe. Of course, there are some data collection guidelines that companies and public bodies must abide by. However, it is a different ball game when it comes to ‘things’ collecting data.

For businesses, governments and consumers alike the internet of things brings with it a great deal to look forward to: better healthcare through remote sensors, and better ways of targeting customers. However, once machines start monitoring both us and the environment we operate in, we will be handing over a lot of data without perhaps realising it. Despite the long list of watchdogs and regulators, there is growing concern about how that data is collected: our personal details, spending patterns and social media interactions (what is called our ‘internet footprint’) can be used or abused.

No doubt smart energy meters, advertised as a great way of reducing energy bills, have their benefits. However, not only the police in your own country, but secret agents from Russia, North Korea or - dare I say it? - Iran could potentially break into the energy company’s internet link and at any time know where you are, when you will get home and how warm you want your house to be.

In London or New York they will tell you that the danger arises when that data starts crossing state boundaries: different countries might fail to adhere to the security and privacy standards applied in the European Union or United States. Even in terms of its own interests, capitalism is not using the data already obtained from sensors very efficiently. The pursuit of immediate benefit (in the case of most companies, increased revenue) often takes priority, but there can be no doubt that the connection between sensors, servers and applications will impact upon ever more aspects of our lives. The question is: who will control the data, how will it be used and who will benefit from these technological advances?

The cloud

These days most CEOs might tell you that their company is in ‘the cloud’. But if you asked them how it works or even what it actually is, few would know. So what is ‘the cloud’ and why is it important?

Cloud-based applications are the key to using data gathered by sensors and systems. The internet of things will not function without cloud-based applications to interpret and transmit the data coming from all these sensors. The cloud is what enables the apps to work for you any time, anywhere. It is what allows you to access Facebook on your mobile phone. In simple terms, cloud computing means storing and accessing data and programs over the internet instead of on your computer’s hard drive or the server in your company’s building.
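In practice, ‘storing data over the internet’ often amounts to a call like the one in this sketch, which uses Amazon’s S3 object store via the boto3 library purely as an example of an infrastructure provider’s service: the bucket name and file are invented, and credentials are assumed to be already configured in the environment.

```python
import boto3

# Upload a local file to an (invented) bucket in Amazon's object store,
# then read it back from anywhere with an internet connection.
s3 = boto3.client("s3")
s3.upload_file("minutes.txt", "example-company-archive", "2015/minutes.txt")

obj = s3.get_object(Bucket="example-company-archive", Key="2015/minutes.txt")
print(obj["Body"].read()[:80])
```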

When companies talk of the cloud they usually mean choosing to implement ‘software as a service’ (SaaS), where the business subscribes to an application it accesses over the internet. There is also ‘platform as a service’ (PaaS), where a business can create its own custom applications for use by all in the company, not forgetting the mighty ‘infrastructure as a service’ (IaaS), where players like Amazon, Microsoft, Google and Rackspace provide a backbone of servers that can be ‘rented out’ by other companies (for example, Netflix is a customer of Amazon’s cloud services rather than running its own servers). These companies gather huge data sets, so large and complex that traditional data-processing applications are inadequate. Challenges include analysis, capture, data curation, search, sharing, storage, transfer, visualisation - and, of course, information privacy.

All this is referred to as ‘big data’. The term often refers simply to the use of predictive analytics or other advanced methods to extract value from data, and seldom to a particular size of data set. Accuracy in big data may lead to more confident decision-making, and better decisions can mean greater operational efficiency, cost reduction and reduced risk.

‘Big data’ usually means datasets with sizes beyond the ability of commonly used software tools to capture, curate, manage and process within a tolerable elapsed time. Analysis of such datasets can find new correlations that “spot business trends, prevent diseases, combat crime and so on”.2 Scientists, business executives, practitioners of media and advertising and governments alike regularly meet difficulties with large datasets in areas including internet search, finance and business informatics. And there are certainly limitations in e-science for meteorologists, physicists, environmentalists and many others.
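One concrete sense of ‘beyond the ability of commonly used tools’ is simply that the data no longer fits in memory, so it has to be processed as a stream. A minimal sketch, with the file name and column names invented for the example, using the pandas library to aggregate a log far too large to load in one go:

```python
import pandas as pd

# Sum purchases per customer from a huge transaction log,
# reading it in one-million-row chunks instead of all at once.
totals = {}
for chunk in pd.read_csv("purchases-2015.csv", chunksize=1_000_000):
    partial = chunk.groupby("customer_id")["amount"].sum()
    for customer, amount in partial.items():
        totals[customer] = totals.get(customer, 0.0) + amount

print(len(totals), "customers aggregated")
```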

The growth in the size of datasets has arisen in part because they are increasingly gathered by cheap and numerous information-sensing devices: aerial sensors (remote sensing), software logs, cameras, microphones, radio-frequency identification readers and wireless sensor networks. The world’s technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s. The challenge for large enterprises is determining who should own big data initiatives that straddle the entire organisation. All this will have to rely on new software, which is constantly evolving and becoming more automated.

Privacy

Last year employees of a software company in Sweden had chips implanted in their wrists, containing a PIN code used to activate the company photocopier. Undergoing minor surgery instead of just remembering a four-digit PIN is a pretty daft idea - you would have to be quite a tech enthusiast to want to do it. After all, we know how often such PINs can be changed ...

But, as The Guardian reporter pointed out, “this news story wasn’t just about privacy and new technologies, and how ‘we’ll all soon be doing it’. This story was about power: who has it, who doesn’t, how it is used. And the internet of things, too, is about power”.3

Most of the ‘things’ in the ‘internet of things’ are focused on supply chains and on machine and system performance, not on consumers. If we are looking at a world where, in less than five years’ time, 100 billion devices are connected to private networks or to the internet, then the data they collect and deploy, the way this data is manipulated by complex algorithms and the way systems interpret and use it will affect everything we do, how economies function, how states operate and maybe how warfare will be waged.

At one end of the scale we have sensor networks, proprietary and open-source protocols and standards, and the competition between IT giants like Apple, Google, Cisco, Oracle, SAP and GE. The idea that social media and the internet will guarantee freedom or help political activism is naive and based on a limited understanding of these corporations.

As in the case of robots, the new hardware, software and platforms shooting up around the internet of things have the potential to do good, but the way they are used will depend on who controls the internet and who controls the root servers. The internet of things will be part of an unequal society: markets will dominate and revenue will decide how it is used. The internet of things promoted by major companies is aimed at increasing revenue, gathering data for long-term profit. Its goal is to have a full profile of everything we do, to reduce the time we spend searching and thus allow us to identify commodities quickly. Time and time again we will choose to sacrifice our own privacy for the sake of convenience or the myth of a bargain. At the end of the day, we are complicit in much of the data-gathering performed on us.

Quite apart from the pages of legal terms and conditions we face, where opting out means not buying the airline ticket, the service or the commodity you want, US researchers found that individuals accept online and physical tracking by businesses because they believe that, if they refuse, it will happen anyway:

… people feel they cannot do anything to seriously manage their personal information the way they want. Moreover, they feel they would face significant social and economic penalties if they were to opt out of all the services of a modern economy that rely on an exchange of content for data. So they have slid into resignation.4

Social media

In January 2010 Facebook announced plans to build its own datacentres, beginning with a facility in Prineville, Oregon. It currently leases space in different datacentres in the US - typically using between 2.25 and 6 megawatts of power capacity and between 10,000 and 35,000 square feet of space.

Earlier this year, Facebook was talking with six major media companies about a deal allowing it to publish their content directly on users’ social media pages: in other words, to avoid the link-clicking we all do when a ‘friend’ shares an item with us. So, as the internet gets busier and speed becomes more important, the no-wait links of major media outlets will enjoy an advantage over the ‘alternative’ content shared on social media.

Uber, the world’s largest taxi company, owns no vehicles; Facebook, the world’s most popular social medium, owns no content (although now it wants to host content from other media); Alibaba, the world’s most valuable retailer, has no inventory and Airbnb, the world’s largest provider of accommodation, owns no real estate.

Some would say that the online travel agents who dominate travel sales, but own no airlines, no hotels, etc, will also have no concerns about passenger security, plane maintenance, airport fees and so on. Since they make profits on fixed capital owned by others, one could say they are true examples of parasitic capital - and to a certain extent they are growing.

As I wrote earlier, we should have limited expectations of social media and other forms of mass communication playing a long-term role in terms of mobilisation or organisation. As soon as such media become a threat, switching them off will take milliseconds. But it is worth noting the example of the well-known Anonymous, a loose association of computer ‘hacktivists’ who support Wikileaks and the Occupy movement. In February 2015, following the attack on Charlie Hebdo’s offices in Paris, Anonymous brought down Islamic State’s main website. They were also responsible for temporarily closing down the websites of the Church of Scientology, the Brazilian government and sponsors of the 2014 football World Cup, as well as government websites in the United States, Israel, Tunisia, and Uganda. They have targeted copyright protection agencies and major corporations including MasterCard, Visa, PayPal, and Sony.

States

Most people using Facebook to post status updates, chat with friends and share photos would never imagine they could be the target of government spying.

Yet earlier this week Facebook announced it will alert users if it believes governments and their agencies - whether the US National Security Agency or the People’s Liberation Army in China - are actively spying on their profiles: “We will notify you if we believe your account has been targeted or compromised by an attacker suspected of working on behalf of a nation state,” said Alex Stamos, chief security officer. “We do this because these types of attack tend to be more advanced and dangerous than others, and we strongly encourage affected people to take the actions necessary to secure all their online accounts.”5

In some respects what was predicted in science fiction novels has become reality, and we have already witnessed cyberwarfare, albeit limited, in the use of the Stuxnet worm and similar endeavours. The unmanned warplanes and drones used extensively in Iraq, Syria and Afghanistan rely on internet communications and automated weapon release … If a hacker diverted the route or destination of one of these drones, the consequences would be far worse than the current situation, where mistakes by automated measuring devices and misinterpretation of intelligence gathered by satellites cause ‘collateral damage’ (civilians mistaken for, say, jihadists).

States are therefore obsessed with cybersecurity, especially at a time when we know (thanks to Wikileaks) that the NSA gathers as much information about the USA’s European ‘allies’ as it does about ‘rogue states’. In the current situation a number of states have the capability of infiltrating and, if necessary, attacking other countries’ websites and internet infrastructure. Worldwide there are only 13 root DNS servers - the address books of the internet - the loss of any one of which could shut down a big chunk of the web.
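The ‘address book’ role is easy to see from any machine: resolving a human-readable name to an IP address is a query to the DNS hierarchy rooted in those servers. A small illustration using Python’s standard library (the domain is simply the paper’s own):

```python
import socket

# Ask the DNS hierarchy which IP address a human-readable name maps to.
print(socket.gethostbyname("weeklyworker.co.uk"))   # prints the site's current IP
```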

The traditional military and nuclear industry safety rule is to make sure sensitive servers and controllers are not connected to the internet. However, as the Stuxnet attack on Iran’s Natanz nuclear installation showed, that in itself is no protection - the worm which infected and compromised computers at Iran’s nuclear plants was introduced on a USB stick!

yassamine.mather@weeklyworker.co.uk

Notes

1. Financial Times October 23 2015.

2. www.relevategroup.com/bigdata.

3. The Guardian August 14 2015.

4. www.upenn.edu/pennnews/news/penn-americans-give-personal-data-discounts-because-they-believe-marketers-will-get-it-anyway.

5. www.facebook.com/notes/facebook-security/notifications-for-targeted-attacks/10153092994615766.