In the early hours of March 15th, servers at the U.S. Department of Health and Human Services were hit with a flood of malicious traffic designed to slow or shut them down. The distributed denial of service (DDoS) attack came as the country battles the coronavirus outbreak, one of several factors that led the agency to take precautions and bolster its IT infrastructure. Although the attack failed to significantly slow the department’s systems, administration officials suspected it was carried out by a “hostile foreign actor.”
It’s not a coincidence that such attacks are occurring with the COVID-19 pandemic in full swing, in a time of heightened fear and anxiety. Hospitals and other healthcare organizations are frequent targets and often vulnerable to such attacks. Their IT systems and sites are already experiencing high traffic due to the virus, and cyberattacks could tip them over entirely, according to Roderick Jones, the founder of Rubica, a cybersecurity company.
In matters of health and personal security, one of the most common attacks among cyber-criminals is ransomware, a type of malware that prevents users from accessing their system or files until a ransom is paid. U.S. Attorney Scott Brady warned of an “unprecedented” wave of attacks and scams from hackers trying to capitalize on fears of the novel coronavirus, known as SARS-CoV-2. In mid-March, researchers uncovered a strain of Android malware that let criminals spy on mobile users through their camera or microphone once they downloaded a coronavirus map purporting to track infections and casualties.
More broadly, companies are finding the need to improve their security posture across the board, lest they be on the receiving end of cyberattacks; from the Financial Times:
On Friday the Cybersecurity and Infrastructure Security Agency, the Department of Homeland Security’s cyber arm, issued an alert urging companies to “adopt a heightened state of cyber security” when implementing remote working, as more workers are asked to telecommute.
The agency said “more vulnerabilities are being found and targeted by malicious cyber actors” as workers increasingly rely on “virtual private networks,” or VPNs, and added that cyber actors could also “increase phishing emails targeting teleworkers to steal their usernames and passwords.”
VPNs were originally developed to allow employees working outside of the office to access company files and applications, but have since been used by people for personal use to increase security while on a public network. With coronavirus reshaping the nature of work (however temporarily), such tools are becoming more important. As a result of this shift, the Cybersecurity and Infrastructure Security Agency (CISA) outlined several key considerations for anyone setting up a remote work environment:
The following are cybersecurity considerations regarding telework.
• As organizations use VPNs for telework, more vulnerabilities are being found and targeted by malicious cyber actors.
• As VPNs are 24/7, organizations are less likely to keep them updated with the latest security updates and patches.
• Malicious cyber actors may increase phishing emails targeting teleworkers to steal their usernames and passwords.
• Organizations that do not use multi-factor authentication (MFA) for remote access are more susceptible to phishing attacks.
• Organizations may have a limited number of VPN connections, after which point no other employee can telework. With decreased availability, critical business operations may suffer, including IT security personnel’s ability to perform cybersecurity tasks.
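CISA’s point about multi-factor authentication is worth making concrete. The time-based one-time passwords (TOTP, RFC 6238) that many MFA apps generate can be sketched in a few lines of standard-library Python; this is an illustrative sketch, not a substitute for a vetted MFA library:

```python
# Illustrative TOTP (RFC 6238) sketch using only the standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Derive a time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted, window=1, step=30):
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + i * step), submitted)
               for i in range(-window, window + 1))
```

Because the six-digit code changes every 30 seconds, a phished password alone is no longer enough to take over the account, which is exactly the failure mode CISA warns about.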
Most of the mitigating solutions recommended by CISA are intuitive: keeping VPNs and devices up to date and warning employees to expect phishing attempts are standard due diligence. Still, organizations running VPNs around the clock may not take the time to patch security vulnerabilities, a risk exacerbated by home WiFi networks, which rarely have the defenses of a corporate network.
But other recommendations, like implementing log review, attack detection mechanisms, and rate limiting solutions, are just as important to maintaining IT security at scale. A study done by Barracuda Networks, a cybersecurity company, found that coronavirus-related phishing attacks have skyrocketed since the end of February, increasing by 667%. VPNs have seen a similar surge; user data from Atlas VPN indicates that broader usage of these networks has grown in tandem with the increase of SARS-CoV-2 cases.
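Of those recommendations, rate limiting is the most mechanical to implement. A common approach is a token bucket: each client gets a budget of tokens that refills at a fixed rate, and requests that find the bucket empty are rejected. The sketch below is one possible shape for such a limiter; the names and parameters are illustrative assumptions, not any particular product’s API:

```python
# Token-bucket rate limiter sketch: one bucket per client would throttle
# a flood of requests (e.g. repeated login attempts) while leaving
# normal traffic untouched.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Keyed by source IP or user ID, the same structure can back both login throttling and the DDoS mitigation mentioned at the top of this piece.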
Despite this surge, not many employees rave about their corporate VPNs. A single infected device or malicious user can pose a huge threat to the integrity of a private network. Cloudflare (disclosure: also my employer) has built a service that secures access to internal applications without a VPN, integrates with multiple identity providers simultaneously, and audits logins and policy changes. This gives organizations the tools to defend against cyberattacks they would otherwise be susceptible to, including phishing, SQL injections, and MITM attacks.
Amid the COVID-19 pandemic, more tools are being deployed across large enterprises in accordance with the notion of zero trust security. This approach dispels the traditional “castle-and-moat” understanding of IT network security, where everyone inside a network is trusted by default. In a zero trust environment, cyber attackers are assumed to exist both inside and outside the network, and access is only granted to users based on the areas in which they should operate. As a result, each request has to prove itself through strict identity verification; from Stratechery:
In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications. This model solves all of the issues inherent to a castle-and-moat approach:
• If there is no internal network, there is no longer the concept of an outside intruder, or remote worker.
• Individual-based authentication scales on the user side across devices and on the application side across on-premise resources, SaaS applications, or the public cloud (particularly when implemented with single-sign on services like Okta or Azure Active Directory).
In short, zero trust computing starts with Internet assumptions: everyone and everything is connected, both good and bad, and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.
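The per-request, identity-level model described above can be sketched as a policy check that runs on every request rather than once at a network perimeter. Everything here, the `Request` fields and the policy table alike, is a hypothetical illustration of the idea, not any real product’s schema:

```python
# Toy per-request access decision in the zero-trust style: nothing is
# trusted by default, and every check must pass on every request.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool     # did the user complete multi-factor auth?
    device_trusted: bool   # is the device known and compliant?
    resource: str          # which internal application is being accessed?

# Granular policy: which users may reach which resource.
POLICY = {
    "payroll-app": {"alice"},
    "wiki": {"alice", "bob"},
}

def authorize(req):
    """Grant access only to verified users, on trusted devices,
    for resources their role explicitly allows."""
    if not (req.mfa_verified and req.device_trusted):
        return False
    return req.user in POLICY.get(req.resource, set())
```

Note that there is no concept of “inside the network” anywhere in the check: a remote worker and an on-premise employee go through exactly the same gate.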
And it’s not only scalable for users, but also more secure for enterprises. In a session on password dependencies at the RSA Conference, an annual IT security conference in San Francisco, engineers from Microsoft claimed that virtually none of the compromised accounts they track used multi-factor authentication (MFA), which stops most automated account attacks. As of January 2020, only 11% of highly sensitive enterprise accounts had MFA enabled. In light of how COVID-19 has fundamentally reshaped the workforce, however temporarily, organizations are implementing similar tools across the board to maintain a zero trust environment.
Governments have dealt with the COVID-19 crisis in a different way. Efforts to track and monitor the pandemic with data-gathering tools have led to concerns about the slow erosion of civil liberties. Hong Kong officials, for instance, recently started distributing bracelets to visitors from overseas that alert authorities when wearers leave their quarantine location. Each bracelet contains a QR code that pairs it with a smartphone app to check whether a quarantined person has observed self-isolation.
The move is effective in several ways, for example by helping medical authorities trace wearers’ contact history. But some worry that these tactics are coercive rather than persuasive measures, with potential for abuse in the long run. Hong Kong authorities could leverage the same tracking tools, for instance, to identify whether someone had participated in an anti-government protest. John Thornhill, the Innovation Editor at the Financial Times, has argued that eroding social trust could be an unintended casualty of the pandemic response.
While the expansion of executive power is a common countermeasure in a national emergency, it may be especially justified in extenuating circumstances. Charles Fried, a professor of law at Harvard Law School, has called the coronavirus pandemic a “black swan event” with no modern precedent, and argues that restrictions on individual liberty are appropriate; from the Harvard Gazette:
Most people are worrying about restrictions on meetings — that’s freedom of association. And about being made to stay in one place, which I suppose is a restriction on liberty. But none of those liberties is absolute; they can all be abrogated for compelling grounds. And in this case the compelling ground is the public health emergency.
Fried insists on distinguishing COVID-19 from national emergencies like 9/11, arguing that the pandemic is more “widely dispersed” and unpredictable. This would, he claims, justify more draconian measures — perhaps short of policing disinformation online, which Fried says would be hard to enforce.
Another tool governments have used to track the pandemic is contact tracing: identifying infected persons, listing those they have come in contact with, and following up with those contacts to monitor symptoms. Contact tracing data can come from the bottom up, with mobile devices providing data to each other; infectious disease experts from the University of Oxford, for example, have been working with European governments to explore the feasibility of a mobile app that would identify infected people and their recent person-to-person contacts.
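The bottom-up approach is often built on rotating pseudonymous tokens: each phone broadcasts short-lived identifiers derived from a secret daily key, records the tokens it hears nearby, and later checks them against keys published by users who test positive. The sketch below illustrates that idea under assumed parameters (10-minute intervals, 16-byte tokens); it is a hypothetical scheme, not the Oxford team’s actual protocol:

```python
# Decentralized contact-tracing sketch: tokens are derived per time
# interval from a daily key, and exposure matching happens locally.
import hashlib
import hmac

def rolling_token(daily_key, interval):
    """Derive the pseudonymous token broadcast during one time interval."""
    msg = interval.to_bytes(4, "big")
    return hmac.new(daily_key, msg, hashlib.sha256).digest()[:16]

def exposure_check(heard_tokens, published_keys, intervals):
    """Re-derive tokens from the keys of diagnosed users; any overlap
    with tokens this phone heard indicates a possible exposure."""
    for key in published_keys:
        for i in intervals:
            if rolling_token(key, i) in heard_tokens:
                return True
    return False
```

Because phones only ever exchange unlinkable tokens, and matching happens on-device, no central server needs to learn who met whom, which is exactly the privacy property that distinguishes this design from the top-down seizures described next.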
But it can also be a top-down process, in which states seize data from platforms directly. On March 16th, Israel’s government authorized its internal security service, Shin Bet, to track and access the mobile phones of infected individuals, using technology primarily developed for counterterrorism purposes.
The private sector is getting involved, to a point. Google is in talks with the U.S. government on potential efforts to share data that would show patterns of user movements to track the spread of coronavirus. Facebook had already been sharing datasets with its Disease Prevention Maps, which provide international agencies, universities, and researchers with an understanding of where people live, their movement patterns, and the strength of their cellular connectivity. The stated objective of these maps is to assist in “reaching vulnerable communities most effectively and in better understanding the pathways of disease outbreaks that are spread by human-to-human contact.” One such example of a Hong Kong map, tracking user movement in gold and known SARS-CoV-2 cases in pink, is reproduced below.
While the application of flow modelling is still in the early stages, it does not raise the same level of ethical concerns as using location data for contact tracing. According to Google, the collection mechanisms built into Android or Google Maps were “not designed to provide robust records for medical purposes,” largely for privacy and security considerations.
In a blog post on March 10th, experts at the Electronic Frontier Foundation cited instances in which data tools and monitoring measures would be required to ensure the protection of the broader public, while making it clear these are unusual times:
In the digital world as in the physical world, public policy must reflect a balance between collective good and civil liberties in order to protect the health and safety of our society from communicable disease outbreaks. It is important, however, that any extraordinary measures used to manage a specific crisis must not become permanent fixtures in the landscape of government intrusions into daily life. There is historical precedent for life-saving programs such as these, and their intrusions on digital liberties, to outlive their urgency.
Thus, any data collection and digital monitoring of potential carriers of COVID-19 should take into consideration and commit to these principles:
• Privacy intrusions must be necessary and proportionate. A program that collects, en masse, identifiable information about people must be scientifically justified and deemed necessary by public health experts for the purposes of containment. And that data processing must be proportionate to the need. For example, maintenance of 10 years of travel history of all people would not be proportionate to the need to contain a disease like COVID-19, which has a two-week incubation period.
• Data collection based on science, not bias. Given the global scope of communicable diseases, there is historical precedent for improper government containment efforts driven by bias on nationality, ethnicity, religion, and race – rather than facts about a particular individual’s actual likelihood of contracting the virus, such as their travel history or contact with potentially infected people. Today, we must ensure that any automated data systems used to contain COVID-19 do not erroneously identify members of specific demographic groups as particularly susceptible to infection.
• Expiration. As in other major emergencies in the past, there is a hazard that the data surveillance infrastructure we build to contain COVID-19 may long outlive the crisis it was intended to address. The government and its corporate cooperators must roll back any invasive programs created in the name of public health after crisis has been contained.
• Transparency. Any government use of “big data” to track virus spread must be clearly and quickly explained to the public. This includes publication of detailed information about the information being gathered, the retention period for the information, the tools used to process that information, the ways these tools guide public health decisions, and whether these tools have had any positive or negative outcomes.
• Due process. If the government seeks to limit a person’s rights based on this “big data” surveillance (for example, to quarantine them based on the system’s conclusions about their relationships or travel), then the person must have the opportunity to timely and fairly challenge these conclusions and limits.
These principles illustrate that while digital monitoring of the pandemic is not in and of itself a risk to civil liberties, the same cannot be said for specific methods of collecting data. In an effort to understand these risks, The Economist charted out the different types of data tools used for monitoring and their application.
While not exhaustive, the list provides insight into the risks outlined by the EFF. Gathering data for longer than the crisis requires serves no medical purpose, so institutions should impose checks on programs created to deal with COVID-19 once the virus is contained and the ends no longer justify the means. Differences across systems of government may also render certain tools more dangerous in the hands of illiberal states. Singapore, for example, has been lauded for its stringent approach to the crisis and the rollout of its contact tracing app, TraceTogether, yet the app teeters on the edge of high-tech surveillance: the country’s health ministry can decrypt and analyse its logs when deemed necessary, simplifying user identification.
The spread of coronavirus might represent a “black swan” event, as Charles Fried puts it. But the trade-offs we make to combat the pandemic should not be taken for granted. Measures like restrictions on cross-border movement, location-tracking, or sharing of private data between healthcare groups and government agencies are becoming the new normal. It’s important to ensure their use doesn’t outlive their relevance.