Green Line Solutions News


Give ’Em the Finger

Thomas Topp - Tuesday, March 07, 2017

The Use of Biometrics in Court Cases


Biometrics -- distinctive, measurable traits that can identify an individual -- have appeared in the form of fingerprint locks on smartphones since 2011. However, it wasn’t until 2013, when Apple released Touch ID on the iPhone 5S, that the technology gained popular momentum.

Using one’s fingerprints to lock a smartphone or sensitive content is appealing because fingerprints are unique to each person and are therefore believed to act as a personalized passcode that cannot be hacked. A fingerprint also spares the user from forgetting a password or typing anything on the screen. In May 2016, however, Russell Brandom, a contributor to The Verge, published an article describing many of the pitfalls of this technology.

One of the biggest problems with fingerprint biometrics is that government and law enforcement agencies maintain vast databases of fingerprint records, and not only for criminals. As Brandom writes:


Homeland Security policy is to collect fingerprints from non-US citizens between the age of 14 and 79 as they enter the country, along with a growing number of fingerprints taken from undocumented immigrants apprehended by Customs and Border Patrol. The FBI maintains a separate IAFIS database with over 100 million fingerprint records, including 34 million "civil prints" that are not tied to a criminal file.


With the information in these databases, a mold of an individual’s fingerprints can be created with ease; the same thing can be accomplished unofficially with a lifted print, dental casting material, and a mold. Perhaps the most damning aspect of biometrics, though, is their permanence. A password can be changed if it is hacked; once biometric data has been stolen, there is no such recourse.

Recently, in The State of Minnesota v. Matthew Vaughn Diamond, the Court of Appeals set a legal precedent. The case began with a burglary in October 2014, after which police seized Diamond’s phone but could not unlock it. Diamond was eventually ordered to provide his fingerprint to unlock the phone, and the evidence gathered resulted in a 51-month sentence.

In the appeal, decided in January of this year, the court ruled that such an order did not violate the defendant’s constitutional rights, reasoning that police already have the authority to collect blood, hair, urine, handwriting, and fingerprint samples even against a person’s wishes.

Judge Tracy Smith wrote that an order to provide a fingerprint for the purpose of unlocking a personal device does not violate a person’s privilege against self-incrimination, nor is it comparable to being made to testify against oneself in court. The distinction is clearest when the former is understood as confirming who you are and the latter as confirming what you know.

Surprisingly, U.S. Magistrate Judge David Weisman, a federal judge in Chicago, denied an FBI request for a warrant that would have compelled suspects to use their fingerprints to unlock their smart devices. The case concerns charges related to child pornography, and investigators believed evidence on the seized cell phones could help secure a conviction.

In an article in the Chicago Tribune, Jennifer Lynch, a senior staff attorney at the nonprofit digital rights group Electronic Frontier Foundation, described the ruling as “narrow in scope” but an important blow to “federal agencies looking for sweeping powers to search individuals' cellphones without probable cause.”

As biometric technologies become more and more ubiquitous, the issue spreads from those involved in illicit activities to law-abiding citizens, because it expands the amount and the types of data that law enforcement can collect without express permission from a judge.



Virtual Reality Series Part IV

Thomas Topp - Wednesday, February 15, 2017

This article is the fourth and final installment in a short series on virtual reality (VR) and augmented reality (AR). Having covered the origins of the concepts, the applications of the technologies, and contemporary devices, the series closes with a look at adopting AR as a lifestyle.


Virtual Reality Series

Part IV: The Evolution of Mann


Steve Mann -- Professor at the University of Toronto’s Department of Electrical and Computer Engineering and Chief Scientist of the Rotman School of Management's Computer Design Lab -- is known as the “Father of AR” for good reason: he has been living in what he calls “computer-mediated reality” for over thirty-five years.

Mann’s current head-mounted display (HMD), or what he refers to as “computerized eyewear,” is known as the EyeTap Generation 4 and is physically attached to his skull, such that special tools are required for its removal. Because of this, the Canadian press has called Mann "the world's first cyborg," though he himself dismisses the term as too vague.

Mann -- who holds a doctorate in Media Arts from MIT, a Bachelor of Science degree, and both bachelor’s and master’s degrees in engineering -- is Founder and Director of both the FL_UI_D Laboratory and the EyeTap Personal Imaging Lab. On the FL_UI_D website, Mann describes the group as one that “designs, invents, builds and uses wearable computers and digital prosthesis in ordinary day-to-day settings.”

Mann explains why he prefers the term “mediated reality” in an article published in IEEE Spectrum:

My computerized eyewear can augment the dark portions of what I’m viewing while diminishing the amount of light in the bright areas … For example, when a car’s headlights shine directly into my eyes at night, I can still make out the driver’s face clearly. That’s because the computerized system combines multiple images taken with different exposures before displaying the results to me... I say that it provides a “mediated” version of reality.
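
The multi-exposure technique Mann describes is what image processing calls exposure fusion. The Python snippet below is a rough sketch of the principle -- not Mann’s actual EyeTap pipeline -- using OpenCV’s Mertens fusion; the file names are invented placeholders for differently exposed shots of the same scene.

import cv2
import numpy as np

# Placeholder file names for a bracketed series of one scene.
exposures = [cv2.imread(p) for p in ("short.jpg", "medium.jpg", "long.jpg")]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, so highlights come from the short exposure and
# shadow detail from the long one -- no camera calibration needed.
fused = cv2.createMergeMertens().process(exposures)

# process() returns floats in roughly [0, 1]; rescale to 8-bit to save.
cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype("uint8"))

A real-time system like Mann’s would run such a fusion continuously on live video rather than on still files, but the underlying idea of combining differently exposed views is the same.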

Mann contrasts this with the “less interesting” AR, which he describes as “the overlay of text or graphics on top of your normal vision.” He points out that this often makes eyesight worse, not better, by “obscuring your view with a lot of visual clutter.”

Mann believes that once a person has experienced day-to-day life with computerized eyewear, they will understand the numerous advantages it grants and will be reluctant to give up their new abilities. For example, Mann explains that his EyeTap includes an infrared camera capable of detecting subtle heat signatures, which allows him to “see which seats in a lecture hall had just been vacated, or which cars in a parking lot most recently had their engines switched off.” Additionally, the EyeTap can enhance text, making it easy to read signs that would otherwise be too far away to discern or that are printed in foreign languages.
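
In principle, spotting a recently vacated seat comes down to comparing a thermal image against the ambient temperature. The toy sketch below is purely illustrative; the sensor values and threshold are invented for the example, not taken from the EyeTap.

import numpy as np

# Toy thermal frame in degrees Celsius (values invented); a real
# infrared sensor would supply a much larger 2-D temperature map.
frame = np.array([
    [21.0, 21.2, 27.5, 21.1],   # the ~27 degree pixels are a warm seat
    [21.1, 21.0, 27.9, 21.3],
    [21.2, 21.1, 21.0, 21.2],
])

ambient = np.median(frame)   # robust estimate of room temperature
residual = frame - ambient   # how much warmer each pixel is

# Surfaces cool slowly, so anything a few degrees above ambient was
# likely occupied (or running) until moments ago.
print(np.argwhere(residual > 3.0))  # coordinates of recently warm spots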

In 2013, Google released its own version of the EyeTap concept, called Google Glass. It was the first widely known and commercialized piece of computerized eyewear, though it arrived more than a decade after Mann’s first-generation EyeTap. Despite the strides Mann had made over thirty years, Google Glass failed to incorporate several of his features that reduce eyestrain in the wearer.

Mann is expanding his influence and spreading his knowledge of wearable technology through his companies and through his work with the IEEE (Institute of Electrical and Electronics Engineers). Decades ahead of the curve, Mann continues to break down the barriers between man and machine.


