Death to “link rot”: here’s where the Internet goes to live forever (Fast Company, 28 March 2014) – The phrase “link rot” probably summons many images for you–none of them good. And while clicking on a dead link isn’t quite as physically unpleasant as, say, touching a piece of slimy, disintegrating wood, bad links are weakening the web as surely as bad beams can compromise a building. When websites disappear or change, any piece of work–be it a blog post, book, or scholarly dissertation–that linked to those resources no longer makes quite as much sense. And some of these now-moldering links are structurally important to the fragile, enduring edifice of human knowledge: in fact, according to one recent study, half of the links in Supreme Court decisions either lead to pages with substantially altered content or no longer go anywhere at all. In the face of this decay, the authors of that paper, the legal scholars Jonathan Zittrain, Kendra Albert, and Lawrence Lessig, floated one possible fix: create “a caching solution” that would help worthy links last forever. Now, this idea is being put into practice by Perma.cc, a startup based out of the Harvard Law Library. Old-school institutions like law school libraries, it turns out, may be perfectly positioned to fight against the new-school problem of link rot. Libraries, after all, are “really good at archiving things,” as Perma’s lead developer, Matt Phillips, puts it. “We have quite a history of storing things safely that are important to people for a really long time,” says Phillips, a member of Harvard’s Library Innovation Lab. “It’s a failure if we’re not preserving what’s being created online.” To start with, Perma.cc’s small team of developers, librarians, and lawyers has designed an archiving tool that’s as easy to use as any link shortener. Stick in a link, and you’ll get a new Perma-link–along with an archive of all the information on the page that link leads to. Anyone can sign up as a user and create links with a shelf life of two years, with an option to renew. A select group of users, though, can “vest” links–committing Perma.cc to store their contents indefinitely. Since launching last fall, the project has grown rapidly, signing up a couple thousand users and recruiting 45 libraries and dozens of law journals as partners. But only a fourth of Perma.cc’s users–472 “vesting members” and 113 “vesting managers,” at current count–have the power to grant links immortality (or as close to it as Perma.cc can manage). “The problem is, in practice, it’s a very serious commitment to say this will be kept forever,” says Jack Cushman, who started contributing to Perma.cc as a volunteer before joining formally as a Harvard Law School Library fellow. “It’s not something that we can promise to everyone in the world to begin with.”
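The basic mechanics the article describes (take in a URL, capture the page behind it, and hand back a durable identifier) can be sketched in a few lines of code. The following Python snippet is a minimal illustration of that caching idea only, not Perma.cc's actual implementation; the local archive directory and the ID scheme are hypothetical.

    # A minimal sketch of the "caching solution" idea: fetch a page, store a
    # timestamped snapshot, and hand back a short permanent identifier.
    # Illustration only; this is NOT Perma.cc's actual implementation.
    import hashlib
    import json
    import time
    import urllib.request
    from pathlib import Path

    ARCHIVE_DIR = Path("archive")  # hypothetical local store

    def archive_url(url: str) -> str:
        """Download `url`, save a snapshot, and return a short perma-style ID."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            body = resp.read()
        # Short identifier derived from the URL and capture time, like a link shortener.
        perma_id = hashlib.sha256(f"{url}{time.time()}".encode()).hexdigest()[:8]
        record_dir = ARCHIVE_DIR / perma_id
        record_dir.mkdir(parents=True, exist_ok=True)
        (record_dir / "page.html").write_bytes(body)
        (record_dir / "metadata.json").write_text(json.dumps({
            "source_url": url,
            "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }, indent=2))
        return perma_id

    if __name__ == "__main__":
        print(archive_url("https://example.com"))

A production archive would also need to capture images, stylesheets, and scripts, and to replicate the stored copies across institutions willing to keep them around for the long haul, which is exactly where the libraries come in.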

Provided by MIRLN.

Image courtesy of FreeDigitalPhotos.net/Stuart Miles.

Cloud-based e-discovery can mean big savings for smaller firms (ABA Journal, 26 March 2014) – Smaller law firms may be able to save a significant amount of money by ‘renting’ e-discovery applications in the cloud rather than bringing a full-fledged hardware and software solution in-house. “Only a few years ago, e-discovery in the cloud wasn’t even available,” said Gareth Evans, an Irvine, Calif.-based partner at Gibson, Dunn & Crutcher, adding that these days, even the smallest law firms have a wide variety of e-discovery providers to choose from. Evans spoke as part of a panel at LegalTech New York 2014 in February. Panelist Alan Winchester, a partner at the New York City firm Harris Beach, agreed: “For firms without robust IT departments, it grants them the experts to manage the technology operations and security.” While renting e-discovery services a sliver at a time may cause some firms to worry about the security of their data offsite, the panelists advised that with a good contract, those concerns can be minimized. [Polley: Interesting story that sounds about right. This might just be a first step.]

Provided by MIRLN.

Image courtesy of FreeDigitalPhotos.net/atibodyphoto.

Can you sue a robot for defamation? (Ryan Calo at Forbes, 17 March 2014) – Life moves pretty fast. Especially for journalists. When an earthquake aftershock shakes America’s second largest city, news outlets scramble to be the first to cover the story. Today the news itself made news when various outlets picked up on a curious byline over at the Los Angeles Times: “this post was created by an algorithm written by the author.” The rise of algorithmically generated content is a great example of a growing reliance on “emergence.” Steven Johnson, in his book by this title, sees the essence of emergence as the movement of low-level rules to tasks of apparently high sophistication. Johnson gives a number of examples, from insects to software programs. As I see it, the text of the earthquake story likewise “emerged” from a set of simple rules and inputs; the “author” in question at the Los Angeles Times, Ken Schwencke, did not simply write the story in advance and cut and paste it. I imagine Schwencke had a pretty good sense of what story the algorithm would come up with were there an earthquake. This is not always the case. Even simple algorithms can create wildly unforeseeable and unwanted results. Thus, for instance, a bidding war between two algorithms led to a $23.6 million book listing on Amazon. And who can forget the sudden “flash crash” of the market caused by high-speed trading algorithms in 2010? I explore the challenges emergence can pose for law in my draft article Robotics and the New Cyberlaw. I hope you read it and let me know what you think. I’ll give you one example: Imagine that Schwencke’s algorithm covered arrests instead of earthquakes and his program “created” a story suggesting a politician had been arrested when in fact she had not been. Can the politician sue Schwencke for defamation? Recall that, in order to overcome the First Amendment, the politician would have to show “actual malice” on the part of the defendant, which is missing here. But, in that case, are we left with a victim with no perpetrator? If this seems far-fetched, recall that Stephen Colbert’s algorithm @RealHumanPraise–which combines the names of Fox News anchors and shows with movie reviews on Rotten Tomatoes–periodically refers to Sarah Palin as “a party girl for the ages” or has her “wandering the nighttime streets trying to find her lover.” To the initiated, this is obviously satire. But one could readily imagine an autonomously generated statement that, were it said by a human, would be libel per se.
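To make the “simple rules and inputs” point concrete, here is a toy Python sketch of a template-filling story generator of the kind the article describes. It is not Schwencke's actual algorithm; the data feed, field names, and sample values are made up for illustration.

    # A toy illustration of how a story can "emerge" from simple rules plus a
    # structured input (here, a hypothetical earthquake data record).
    # This is NOT Schwencke's actual algorithm, just the template-filling idea.
    def quake_story(event: dict) -> str:
        """Turn a structured earthquake record into a short news item."""
        size = "strong" if event["magnitude"] >= 5.0 else "light"
        return (
            f"A {size} magnitude {event['magnitude']} earthquake was reported "
            f"{event['distance_miles']} miles from {event['place']} on "
            f"{event['time']}, according to the U.S. Geological Survey. "
            "This post was created by an algorithm written by the author."
        )

    if __name__ == "__main__":
        sample = {
            "magnitude": 4.4,
            "distance_miles": 6,
            "place": "Westwood, California",
            "time": "Monday morning",
        }
        print(quake_story(sample))

The legal puzzle arises when the structured input is wrong or the rules combine in unforeseen ways: the code runs exactly as written, yet no human ever composed, or intended, the particular sentence that results.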

Provided by MIRLN.

Image courtesy of FreeDigitalPhotos.net/Stuart Miles.

Google catches French govt spoofing its domain certificates (ZDnet, 9 Dec 2013) – France’s cyberdefence division, Agence nationale de la sécurité des systèmes d’information (ANSSI), has been detected creating unauthorised digital certificates for several Google domains. Google states on its own security blog that an intermediate certificate authority (CA) issued the certificate, which links back to ANSSI. “Intermediate CA certificates carry the full authority of the CA, so anyone who has one can use it to create a certificate for any website they wish to impersonate,” Google wrote. In a statement by ANSSI, the cyberdefence organisation revealed that this intermediate CA is actually its own infrastructure management trust administration, or “L’infrastructure de gestion de la confiance de l’administration” (IGC/A). ANSSI itself is the cyber response and detection division of the French republic. ANSSI states that the fraudulent certificates were a result of “human error, which was made during a process aimed at strengthening overall IT security”. Google states that the certificate was used in a commercial device, on a private network, to inspect encrypted traffic. According to the web giant, users on that network were aware that this was occurring, but the practice was in violation of ANSSI’s procedures. Google used the incident to highlight the need for its Certificate Transparency project, aimed at fixing flaws in the SSL certificate system that could result in man-in-the-middle attacks and website spoofing. Google’s answer to these flaws is for CAs to adopt a framework that monitors and audits certificates, thus exposing rogue CAs and illegitimately issued certificates. This is not the first time that the flaws of SSL certificates have been exposed. The US National Security Agency is alleged to have used man-in-the-middle attacks through unauthorised certificates against Google in the past. Additionally, in August 2011, a breach at DigiNotar, another CA, allowed an Iranian hacker to create rogue certificates for Google domains and intercept user passwords for Gmail.
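The incident turned on the fact that a trusted intermediate CA can mint a certificate for any domain, and that clients will accept it unless they check who actually did the signing; Chrome caught the ANSSI certificates because it pins the expected issuers for Google domains. The Python sketch below shows the general idea in simplified form: it reads the issuer of a site's leaf certificate and compares it against an expected list. The issuer list here is illustrative only, and real pinning compares public-key hashes rather than issuer names.

    # A simplified sketch of the idea behind certificate pinning: connect to a
    # host, look at who issued its certificate, and flag anything unexpected.
    # Real pinning (as in Chrome) compares public-key hashes, not issuer names;
    # the expected-issuer set below is illustrative only.
    import socket
    import ssl

    EXPECTED_ISSUERS = {"Google Trust Services"}  # hypothetical pin set

    def issuer_of(host: str, port: int = 443) -> str:
        """Return the organization name of the issuer of `host`'s leaf certificate."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        issuer = dict(field[0] for field in cert["issuer"])
        return issuer.get("organizationName", "unknown")

    if __name__ == "__main__":
        issuer = issuer_of("www.google.com")
        status = "ok" if issuer in EXPECTED_ISSUERS else "UNEXPECTED ISSUER"
        print(f"www.google.com issued by: {issuer} ({status})")

Certificate Transparency attacks the same problem from the other side: by requiring certificates to appear in public, auditable logs, mis-issuance becomes visible to the domain owner rather than staying hidden on a private network.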

Provided by MIRLN.

Image courtesy of FreeDigitalPhotos.net/Vichaya.


Scientists used Facebook for the largest ever study of language and personality – and the results are groundbreaking (Business Insider, 2 Oct 2013) – A group of University of Pennsylvania researchers who analyzed Facebook status updates of 75,000 volunteers have found an entirely different way to analyze human personality, according to a new study published in PLOS One. The volunteers completed a common personality questionnaire through a Facebook application and made their Facebook status updates available so that researchers could find linguistic patterns in their posts. Drawing from more than 700 million words, phrases, and topics, the researchers built computer models that predicted the individuals’ age, gender, and their responses on the personality questionnaires with surprising accuracy. The “open-vocabulary approach” of analyzing all words was shown to be as predictive as (and in some cases more predictive than) traditional methods used by psychologists, such as self-reported surveys and questionnaires that rely on a predetermined set of words. Basically, it’s big data meets psychology. The Penn researchers also created word clouds that “provide an unprecedented window into the psychological world of people with a given trait,” graduate student Johannes Eichstaedt, who worked on the project, said in a press release. “Many things seem obvious after the fact and each item makes sense, but would you have thought of them all, or even most of them?” [Polley: story includes some pretty fascinating word clouds; this looks like quite an interesting study.]
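In machine-learning terms, the “open-vocabulary approach” amounts to letting the words people actually use become the features, rather than starting from a psychologist-chosen word list. The toy Python sketch below, which assumes scikit-learn is available, shows the shape of that pipeline; the status updates, the extraversion scores, and the choice of ridge regression are all invented for illustration and are far simpler than the models in the study.

    # A toy sketch of the "open-vocabulary" idea: represent each person's status
    # updates as counts over all the words they used, then fit a model that
    # predicts a questionnaire score. Data below are made up; the Penn study
    # used far larger vocabularies and richer models.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import Ridge

    # Hypothetical (status updates, extraversion score) pairs.
    statuses = [
        "so excited for the party tonight with everyone!!",
        "great night out with friends, love you all",
        "quiet evening at home reading and drinking tea",
        "finally finished my book, time for more tea",
    ]
    extraversion = [4.5, 4.2, 2.1, 2.4]

    vectorizer = CountVectorizer()          # no predetermined word list
    X = vectorizer.fit_transform(statuses)  # people x words count matrix
    model = Ridge(alpha=1.0).fit(X, extraversion)

    new_status = ["can't wait to see everyone at the show tonight"]
    print(model.predict(vectorizer.transform(new_status)))

The study's models were of course far richer, drawing on phrases and topics as well as single words, but the underlying move is the same: predict a trait from whatever language people actually produce, rather than from a fixed dictionary.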

Provided by MIRLN.

Photo courtesy of Renjith Krishnan/FreeDigitalPhotos.net

The latest backlash from the NSA spying scandal may be directed not squarely at the U.S. government, but at U.S. businesses.  President Rousseff of Brazil is proposing legislation that would require data generated within the country to also be stored on servers within the country.  What kind of data would be covered, and exactly how this would work given the complexity of identifying where data originates in our ever more interconnected world, is not yet clear.

As this article in Bloomberg points out, Latin Americans have long been suspicious of U.S. spying activities on the continent.  However, Brazil would not be the first country to impose such a requirement on technology companies.  Currently, European countries require sensitive personal data to be stored on servers in-country.  Technology advocates cite slower traffic speeds and an increased potential for problems as drawbacks of the proposed legislation.  Requiring companies to house servers domestically may also result in protectionist measures meant to bolster local technology industries, and perhaps even trade disputes.