Green Line Solutions News

Give ’Em the Finger

Thomas Topp - Tuesday, March 07, 2017

Give ’Em the Finger

The Use of Biometrics in Court Cases


Biometrics -- distinctive, quantifiable traits used to identify an individual -- have appeared in the form of fingerprint locks on smartphones since 2011. However, it wasn’t until 2013, when Apple released Touch ID on the iPhone 5S, that the technology gained popular momentum.

Using one’s fingerprint to lock a smartphone or sensitive content is appealing because every fingerprint is unique and is therefore believed to act as a personalized passcode that cannot be hacked. It also spares the user from forgetting a password or having to type anything on the screen. In May of 2016, however, Russell Brandom, a contributor to The Verge, published an article describing many of this technology’s pitfalls.

One of the biggest problems with fingerprint biometrics is that vast government and law-enforcement databases already store fingerprint information, and not just on criminals. As Brandom writes:


Homeland Security policy is to collect fingerprints from non-US citizens between the age of 14 and 79 as they enter the country, along with a growing number of fingerprints taken from undocumented immigrants apprehended by Customs and Border Patrol. The FBI maintains a separate IAFIS database with over 100 million fingerprint records, including 34 million "civil prints" that are not tied to a criminal file.


Using information from these databases, a mold of an individual’s fingerprints can be created with ease; the same thing can be accomplished unofficially with a print left behind on a surface, a dental cast, and a mold. Perhaps the most damning aspect of biometrics is that they are permanent: unlike a password, which can be changed if it is hacked, stolen biometric data offers no such recourse.

Recently, in the case of The State of Minnesota v. Matthew Vaughn Diamond, a legal precedent was set by the Court of Appeals. The case began with a burglary in October of 2014, after which Diamond’s phone was seized but could not be unlocked. He was eventually ordered to provide his fingerprint to unlock the phone, and the evidence gathered resulted in a 51-month sentence.

In the appeal, decided in January of this year, the court ruled that such an order was not a violation of the defendant’s constitutional rights, citing that police have the authority to gather blood, hair, urine, handwriting, and fingerprint samples even against a person’s wishes.

Judge Tracy Smith wrote that an order to provide a fingerprint for the purpose of unlocking a personal device does not violate a person’s privilege against self-incrimination, nor is it comparable to being made to testify against oneself in court. The distinction becomes clear when the former is thought of as a confirmation of who you are, and the latter as a confirmation of what you know.

Surprisingly, U.S. Magistrate Judge David Weisman, a federal judge in Chicago, denied an FBI request for a warrant mandating the use of suspects’ fingerprints to unlock their smart devices. The case concerns charges related to child pornography, and evidence suspected to be on the cell phones could help secure a conviction.

In an article from the Chicago Tribune, the ruling was described as “narrow in scope,” but it dealt an important blow to “federal agencies looking for sweeping powers to search individuals' cellphones without probable cause,” as stated by Jennifer Lynch, a senior staff attorney at the nonprofit digital rights group Electronic Frontier Foundation.

As biometric technologies become more and more ubiquitous, the issue spreads beyond those involved in illicit activities to law-abiding citizens, since the amount and variety of data samples that law enforcement can collect without express permission from a judge keeps growing.



Virtual Reality Series Part IV

Thomas Topp - Wednesday, February 15, 2017

This article is the fourth and final installment in a short series about virtual reality (VR) and augmented reality (AR); having discussed the origins of the concepts, the applications of the technologies, and contemporary devices, it will focus on adopting AR as a lifestyle.


Virtual Reality Series

Part IV: The Evolution of Mann


Steve Mann -- Professor at the University of Toronto’s Department of Electrical and Computer Engineering and Chief Scientist of the Creative Destruction Lab at the Rotman School of Management -- is known as the “Father of AR” for good reason: he has been living in what he calls “computer-mediated reality” for over thirty-five years.

Mann’s current HMD, or what he refers to as “computerized eyewear,” is known as EyeTap Generation 4 and is physically attached to his skull, such that special tools are required for its removal. Because of this, Mann has been called "the world's first cyborg" by the Canadian press, though he himself dismisses the term as too vague.

Mann -- who holds a doctorate in Media Arts and Sciences from MIT, a Bachelor of Science degree, and both a Bachelor’s and a Master’s degree in Engineering -- is Founder and Director of both the FL_UI_D Laboratory and the EyeTap Personal Imaging Lab. On the FL_UI_D website, Mann describes the group as one that “designs, invents, builds and uses wearable computers and digital prosthesis in ordinary day-to-day settings.”

Mann explains why he prefers the term “mediated reality” in an article published in IEEE Spectrum:

My computerized eyewear can augment the dark portions of what I’m viewing while diminishing the amount of light in the bright areas … For example, when a car’s headlights shine directly into my eyes at night, I can still make out the driver’s face clearly. That’s because the computerized system combines multiple images taken with different exposures before displaying the results to me... I say that it provides a “mediated” version of reality.

By contrast, Mann describes the “less interesting” AR as “the overlay of text or graphics on top of your normal vision,” pointing out that this often makes eyesight worse, not better, by “obscuring your view with a lot of visual clutter.”
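The exposure-combining behavior Mann describes is, in essence, multi-exposure (high-dynamic-range) fusion. The sketch below is a minimal illustration of the general technique, not Mann’s actual EyeTap pipeline, which the article does not detail; the mid-gray “well-exposedness” weighting and the sigma value are assumptions chosen for clarity.

# A minimal sketch of multi-exposure fusion: frames shot at different
# exposures are blended so both shadows and highlights stay visible.
# Generic and illustrative only -- not the EyeTap's actual code.
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Blend differently exposed grayscale frames (pixel values in [0, 1]).

    Each pixel is weighted by how close it is to mid-gray (0.5), so
    blown-out highlights and crushed shadows contribute little.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-12  # normalize weights per pixel
    return (weights * stack).sum(axis=0)

# Example: an underexposed frame and an overexposed frame of one scene.
dark = np.clip(np.random.rand(4, 4) * 0.3, 0.0, 1.0)
bright = np.clip(dark + 0.6, 0.0, 1.0)
print(fuse_exposures([dark, bright]).round(2))

The headlight example from the quote falls out of this weighting naturally: the glare region is near-white in the normal exposure, so its weight collapses and the darker exposure, where the driver’s face is legible, dominates that region instead.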

Mann believes that once a person has experienced day-to-day life with computerized eyewear, they’ll understand the numerous advantages it grants and will be reluctant to give up their new abilities. For example, Mann explains that his EyeTap includes an infrared camera capable of detecting subtle heat signatures, which allows him to “see which seats in a lecture hall had just been vacated, or which cars in a parking lot most recently had their engines switched off.” Additionally, the EyeTap can enhance text, making it easy to read signs that would otherwise be too far away to discern or that are printed in foreign languages.
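As a rough illustration of the heat-signature idea, residual warmth can be separated from both the ambient background and a live occupant in a thermal frame. The thresholds below are invented for the example; the article does not describe how the EyeTap actually processes its infrared data.

# Illustrative only: flag pixels warmer than ambient but cooler than a
# live body -- e.g., a seat that was just vacated. The Celsius
# thresholds are assumptions, not values from Mann's system.
import numpy as np

AMBIENT_MAX = 24.0  # at or below: background
BODY_MIN = 30.0     # at or above: a live occupant

def recently_vacated(thermal_frame):
    """Return a boolean mask of pixels showing residual warmth."""
    frame = np.asarray(thermal_frame, dtype=np.float64)
    return (frame > AMBIENT_MAX) & (frame < BODY_MIN)

# Example: a tiny grid of seat temperatures in a lecture hall.
seats = np.array([[22.0, 27.5, 22.5],
                  [23.0, 34.0, 26.8],
                  [22.1, 22.4, 23.9]])
print(recently_vacated(seats))  # True only at 27.5 and 26.8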

In 2013, Google released its own version of the EyeTap, called Google Glass. The prototype was the first widely known, commercialized example of computerized eyewear, though its development followed more than a decade after Mann’s first-generation EyeTap. Despite the many strides Mann had made over thirty years, Google Glass failed to incorporate several features that reduce eyestrain in the wearer.

Mann is expanding his influence and spreading his knowledge of wearable technology through work with his companies and with the IEEE (Institute of Electrical and Electronics Engineers). Decades ahead of the curve, Mann’s innovations continue to break down the barriers between man and machine.




Virtual Reality Series Part II

Thomas Topp - Wednesday, February 01, 2017

This article is the second in a short series about virtual reality (VR) and augmented reality (AR); having discussed the difference between VR and AR, as well as the origins of the concepts and technologies, it will focus on the development of VR for training programs and a new frontier of amusement. Later articles pertain to contemporary VR devices and adopting AR as a lifestyle.


Virtual Reality Series

Part II: Training and Gaming


As discussed in the previous article, although the first head-mounted display (HMD), the “Sword of Damocles,” was invented in 1968, VR and AR remained the purview of military research and video game design until the research boom of the 1990s. Both industries focused on increasing the systems’ immersiveness and responsiveness, resulting in more realistic graphics, wearable tech, and the expansion of VR and AR into niche fields.

Recognizing the revolutionary possibilities of VR, the US Air Force commissioned Thomas A. Furness III in 1966 to develop advanced visual systems for flight simulation. Working out of Wright-Patterson Air Force Base in Dayton, Ohio, from ’66 to ’89, Furness developed advanced cockpit simulators for fighter aircraft. In 1982, his Visually Coupled Airborne Systems Simulator (VCASS) offered trainees a virtual environment where they could develop skills without real-world consequences for mistakes.

Beyond offering a safe environment for soldiers to learn, simulators also allow trainees to experience various scenarios, landscapes, and situations. By using a virtual program, soldiers can repeat training exercises and therefore get more training hours with less downtime; simulators are also much cheaper and more eco-friendly than in-air flight training.

Using flight or driving simulators lets the trainee become familiar with the controls and handling of expensive, and potentially lethal, vehicles before being placed in the cockpit or behind the wheel. Simulations can also be used to train medical personnel to better perform complicated surgeries and to become familiar with various procedures in a controlled, correctable, and repeatable manner.

The other major industry for VR and AR during the decades before the technology became relatively mainstream was video gaming. In particular, Atari played a key role by hiring Jaron Lanier and Thomas G. Zimmerman, who would later go on to co-found VPL Research in 1984. VPL Research is credited with developing early wearable tech, such as the DataGlove, which allowed people to manipulate virtual objects in three dimensions; the EyePhone, an HMD that tracked the wearer’s head movements; and the DataSuit, a full-body outfit covered in sensors that measured arm, leg, and trunk movements. A minimal sketch of the glove idea follows.
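To make the glove idea concrete, here is a rough sketch of how per-finger flex readings might drive a grab-and-drag interaction. It is a generic illustration, not VPL’s software; the GRAB_THRESHOLD value and the data layout are invented for the example.

# Illustrative sketch of a sensor glove's core loop: curled fingers
# count as a grab, and a grabbed object follows the tracked hand.
from dataclasses import dataclass, field

GRAB_THRESHOLD = 0.7  # assumed normalized flex value meaning "bent"

@dataclass
class VirtualObject:
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class GloveState:
    hand_position: tuple  # 3D position from the glove's tracker
    finger_flex: list = field(default_factory=list)  # 0.0 open .. 1.0 fist

def update(obj: VirtualObject, glove: GloveState) -> VirtualObject:
    """Move the object with the hand while the fingers are curled."""
    grabbing = sum(glove.finger_flex) / len(glove.finger_flex) > GRAB_THRESHOLD
    if grabbing:
        obj.position = glove.hand_position  # object follows the hand
    return obj

# Example: a closed fist at (0.2, 1.0, -0.5) drags the object along.
cube = VirtualObject()
fist = GloveState(hand_position=(0.2, 1.0, -0.5), finger_flex=[0.9] * 5)
print(update(cube, fist).position)  # -> (0.2, 1.0, -0.5)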

Unfortunately, the technology remained prohibitively expensive for the everyday consumer, and VR for the layman was largely relegated to arcades. For example, in 1991 Virtuality released the first mass-produced, networked, multiplayer VR entertainment system, sold under the same name. A Virtuality system cost roughly $75,000 and contained multiple player pods, headsets, and exoskeleton gloves, making it the first immersive VR experience available to the public. Other VR arcade systems were more widespread, such as driving and first-person shooter games, some of which incorporated haptic feedback to more fully immerse the player.

The next article in this series will delve deeper into contemporary -- here meaning “since 2000” -- VR and AR devices, particularly those developed for personal use.