With the ISIS attacks in Paris still fresh in everyone’s minds, many concerns are being raised about data surveillance laws. Even though there is no evidence that the terrorist attacks involved the use of encrypted data, some supporters of expanded data surveillance are citing the attacks as proof that wider-ranging laws are needed. This is nothing new; the ongoing battle between privacy proponents and lawmakers who favor more surveillance is thrust into the spotlight increasingly often. Disagreements over data encryption will likely only increase, with 75% of internet interactions expected to be encrypted within the next ten to fifteen years. And while supporters of internet and data privacy have no problem with this rise in encryption, it will create technical obstacles for government agencies and law enforcement officials who need to access information to bring criminals and terrorists to justice.

A compromise has been suggested: some officials have proposed laws that would require tech companies to develop methods for police to obtain access to encrypted information, although this may not even be possible. Companies such as Apple and Google cannot access data encrypted on their own devices and services. And even where it is possible, the White House has agreed not to move forward with any legislation that would require companies to make encrypted data available whenever police need it.
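
To see why such access may be technically impossible, consider a minimal sketch of client-side encryption, in which the key is derived from a passphrase that never leaves the user’s device, so the provider stores only ciphertext it cannot read. This is an illustration only (using Python’s third-party cryptography package), not a description of any company’s actual design:

```python
# Minimal sketch: client-side encryption where only the user holds the key.
# Illustrative only; not any company's real implementation.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Turn a user passphrase into a symmetric key on the user's own device."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

# Everything below happens on the user's device.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
ciphertext = Fernet(key).encrypt(b"private message")

# The provider receives only the ciphertext (and perhaps the salt).
# Without the passphrase, it cannot recover the plaintext.
print(Fernet(key).decrypt(ciphertext))  # only the key holder can do this
```

Under this model, a legal demand served on the provider cannot yield plaintext, because the provider never held the key in the first place.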

Finding a balance between protecting users’ privacy online and conducting surveillance in the name of preserving law and order is an ongoing process, and it should not be settled hastily in the wake of a crisis. While there should be legal limits on the seizure of encrypted data, there must also be limits on how and what is encrypted. Determining these limits will take time.

Article via The Washington Post, November 18, 2015

Photo: Point Cloud Data via Daniel V [Creative Commons Attribution-NonCommercial-NoDerivs]

Virtually all industries are being affected by the complexities of cybersecurity and privacy law. In addition to being confusing, these rules can change practically overnight. With this in mind, the international law firm Akerman now offers a continuously updated, web-based legal knowledge platform on cybersecurity and privacy law called the Akerman Data Law Center. Developed in conjunction with Thomson Reuters and Neota Logic, the platform makes international rules and regulations on cybersecurity more accessible. The tool should be useful for law firms everywhere, since cybersecurity and privacy are “likely to have accelerated growth in the law market for 2016,” as explained by Akerman’s Data Law Practice co-chair, Martin Tully. Besides always being up to date, the platform can be used to research changes that pertain only to specific regions or industries, which could be extremely useful to firms that operate in several jurisdictions and want to keep track of differences in regulations between regions.

Though access to the research compiled in the Akerman Data Law Center requires a subscription fee, Akerman states that the platform will save users up to 80% on research costs. Compared with the hours associates could spend compiling the research already available within the platform, the Akerman Data Law Center is more efficient and less expensive. To make the platform even more user-friendly, Akerman allows users to contact the firm directly with particularly challenging questions, which will prove useful for firms that do not have the funds to consult experts constantly.

Article via Legaltech News, November 20, 2015

Photo: Chained and locked via Vivek [Creative Commons Attribution-NonCommercial-NoDerivs]

 

On Feb. 4, 2010, Maria Nucci sued Target for an injury she sustained while working at the store. When Target requested access to her social media account, however, Nucci objected, and 36 photos were deleted from the account two days later. The Fourth District Court of Appeal of the State of Florida nonetheless granted Target’s motion with respect to all photographs on the Facebook page that included Nucci. She argued she had a right to privacy, but the judges used that very argument against her.

“Because ‘information that an individual shares through social networking websites like Facebook may be copied and disseminated by another,’ the expectation that such information is private, in the traditional sense of the word, is not a reasonable one,” the panel ruled, partially quoting another Florida case. It also added, “Before the right to privacy attaches, there must exist a legitimate expectation of privacy.”

The use of social media evidence in court cases continues to skyrocket; by one estimate, it now figures in about 80% of cases. According to John Facciola, the information first has to be collected, and then it has to be sorted into what the attorney needs and does not need. Courts are still trying to work out how to handle social media in discovery and the privacy rights of those whose profiles are in question. This past year, the arguments have centered on two main issues: authentication, and where the expectation of privacy ends.

Social media is notorious for one thing in particular: you don’t have to be who you say you are online, as parody Twitter accounts and duplicate LinkedIn profiles demonstrate. State courts apply different standards to the authentication of social media. The Maryland standard, for example, was that “the judge had to be ‘convinced’ that a social media post wasn’t falsified or created by another user.” The Texas approach, on the other hand, stipulated that any evidence could be used “as long as the proponent of the evidence can demonstrate to the judge that a jury can reasonably find that evidence to be authentic.” In United States v. Vayner, Vladyslav Timku testified that Aliaksandr Zhyltsou had provided a forged birth certificate for an imaginary infant daughter so that Timku could avoid compulsory military service in Ukraine. The key piece of evidence was a page from the defendant’s social media account, but the federal agent who introduced it could not authenticate it. Maryland has since revisited its standard and deemed that the judge must identify which evidence would be sufficient; in other words, the judge has to determine that “there is proof from which a reasonable juror could find that the evidence is what the proponent is claiming.”

Article via Legaltech News, November 2, 2015

Photo: Affiliated Network for Social Accountability- Arab World via World Bank Photo Collection [Creative Commons Attribution-NonCommercial-NoDerivs]

The US Senate voted this past Tuesday to pass the Cybersecurity Information Sharing Act (CISA), which allows companies to share evidence of cyberattacks with the US government, even if that data includes the personal information of individuals.

Those in favor of the bill argue that CISA will help the government protect companies from cyberattacks. Most big tech companies make up the opposition, saying the new act is another loophole that allows the US government to snoop on citizens. President Obama supports CISA.

Al Franken, a senator from Minnesota and one of 21 who voted against the bill, said in a statement following CISA’s passing, “There is a pressing need for meaningful, effective cybersecurity legislation that balances privacy and security. This bill doesn’t do that.”

Companies are supposed to remove personal information about customers—such as emails and text messages—before sending data to the government. Currently, however, no accountability system exists to ensure that personal identifiers are in fact deleted before reaching government databases.
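
As a rough illustration of what that scrubbing step could look like in practice (the bill prescribes no particular format or tooling, so the report text, patterns, and function below are purely hypothetical), a company might strip obvious identifiers before handing an incident report to a federal agency:

```python
# Hypothetical illustration only: CISA does not specify a format or a
# scrubbing mechanism. This sketch shows the kind of redaction companies
# are expected to perform before sharing a threat report.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def scrub(report: str) -> str:
    """Replace obvious personal identifiers with placeholders."""
    report = EMAIL.sub("[REDACTED EMAIL]", report)
    report = PHONE.sub("[REDACTED PHONE]", report)
    return report

incident = "Phishing mail sent to jane.doe@example.com; victim called 555-867-5309."
print(scrub(incident))
# -> "Phishing mail sent to [REDACTED EMAIL]; victim called [REDACTED PHONE]."
```

A real compliance workflow would need far more than two regular expressions, which is part of why critics doubt that personal data will reliably be kept out of government databases.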

CISA was most likely passed in response to recent high-profile hacks, such as those against Sony Pictures, Ashley Madison, and United Airlines.

“With security breaches like T-Mobile, Target, and [the US government’s Office of Personnel Management] becoming the norm, Congress knows it needs to do something about cybersecurity,” said Mark Jaycox, a legislative analyst at the Electronic Frontier Foundation. “It chose to do the wrong thing.”

Article via CNET, October 27, 2015

Photo: The Capitol, in Washington, D.C. US Senate and The House of Representatives via DeusXFlorida [Creative Commons Attribution-NonCommercial-NoDerivs]

On Tuesday, October 27, the US Senate voted to pass the Cybersecurity Information Sharing Act.

The bill allows companies to share evidence of cyberattacks with the US government even if doing so violates a person’s privacy. Supporters say the act will make it easier for the government to monitor threats and coordinate responses across companies. Opponents, including Apple and other top tech companies, argued that the bill could give the government more liberty to spy on US citizens.

US Chamber of Commerce President and CEO Thomas Donohue said this legislation is a “positive step toward enhancing our nation’s cybersecurity.”

Twenty-one senators voted against the act. Among them was Minnesota Democrat Al Franken, who believes there is a need for “effective legislation that balances security and privacy” and that “the CISA does not do that.”

CISA was first introduced just last year and passed the House, but it did not get through the Senate. High-profile cyberattacks on companies like Sony Pictures, United Airlines, and Ashley Madison may have prompted the Senate to approve it this time around.

The issue at hand is that personal identifiers such as text messages and e-mails may slip through when companies send information to law enforcement and intelligence agencies, even though companies are supposed to delete that information first.

The US Department of Homeland Security has acknowledged that the bill raises “privacy and civil liberty concerns.”

CISA now moves to a congressional conference, whose members must reconcile the Senate and House bills before sending the legislation to President Obama.

Article via CNET Security News, October 27, 2015

Photo: Washington DC – Capitol Hill: United States Capitol via Wally Gobetz [Creative Commons Attribution-NonCommercial-NoDerivs]

In a recent ruling, the European Court of Justice struck down Safe Harbor, the agreement that dictated the rules for transatlantic data flows between the United States and the European Union. The invalidation of Safe Harbor carries significant consequences for American e-commerce firms that operate in Europe. Companies like Google and Facebook, as well as the U.S. administration, now must make high-profile decisions in response to the ruling.

Europe has broad legislation protecting the personal information of E.U. citizens from being exploited by businesses. The U.S., in contrast, codifies privacy protections only against government institutions and for certain highly sensitive data, such as health records. Safe Harbor’s “principles” were more flexible extensions of the E.U.’s privacy laws; violations of Safe Harbor could result in sanctions from a self-regulatory organization or the Federal Trade Commission.

When Europe’s highest court invalidated the agreement, it did so on the premise that European citizens’ personal data was being exploited by U.S. tech companies as well as by the U.S. government. The ruling grew out of a case referred by an Irish court questioning Safe Harbor’s legality. Any new agreement will have to contain more stringent privacy rules, and will therefore create more limitations for U.S. firms.

Facebook and Google’s immediate options include continuing their business practices amid legal uncertainty, shutting down their European operations (at a major loss), or changing their business model to include more data collection centers in Europe. The last alternative would require the companies to keep European and American data completely separate, with economic inefficiency as the consequence.

Article via The Washington Post, October 6, 2015

Photo: Bandiera dell’Unione (EU Flag) via Giampaolo Squarcina [Creative Commons Attribution-NonCommercial-NoDerivs]