
Design of an Improved Electronics Platform for the EyeRing Wearable Device

by

Kelly Ran

S.B. E.E., M.I.T., 2012

Submitted to the Department of Electrical Engineering and Computer Science in Partial Fulfillment of the Requirements for the Degree of Master of Engineering in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology

September 2013

© Massachusetts Institute of Technology. All rights reserved.

The author hereby grants to M.I.T. permission to reproduce and to distribute publicly paper and electronic copies of this thesis document in whole and in part in any medium now known or hereafter created.

Signature redacted

Author: Department of Electrical Engineering and Computer Science, September 27, 2013

Signature redacted

Certified by: Pattie Maes, Alexander W. Dreyfoos Professor of Media Technology, MIT Media Lab, Thesis Supervisor, September 27, 2013

Signature redacted

Accepted by: Albert R. Meyer


Design of an Improved Electronics Platform for the EyeRing Wearable Device

by Kelly Ran

Submitted September 27, 2013, in Partial Fulfillment of the Requirements for the Degree of Master of Engineering in Electrical Engineering and Computer Science

Supervisor: Professor Pattie Maes

Abstract

This thesis presents a new prototype for EyeRing, a finger-worn device equipped with a camera and other peripherals. EyeRing is used in assistive technology applications, helping visually impaired people interact with uninstrumented environments. Applications for sighted people are also available to aid in learning, navigation, and other tasks. EyeRing is a wearable electronics device with an emphasis on natural gestural input and minimal interference. Previous prototypes used assemblies of commercial off-the-shelf (COTS) control and sensing solutions. A more custom platform, consisting of two printed circuit boards (PCBs), peripherals, and firmware, was designed to make the device more usable and functional. Firmware was developed to improve the communication capabilities, microcontroller functionality, and system power use. Improved features allow the pursuit of previously unreachable application spaces. In addition, the smaller form factor increases usability and device acceptance. The new prototype improves power consumption by


Acknowledgements

I would like to thank Pattie Maes for the wonderful opportunity and resources to work on EyeRing. Thank you for welcoming me into your group and sharing your advice and time. I truly value the huge learning experience that you have enabled and the environment you have created in the Fluid Interfaces group.

Thanks to Roy Shilkrot for guidance and encouragement. I deeply appreciate the energy that you've put into helping me. Thanks to the entire EyeRing team for your previous work and current advancements. Thanks to Cassandra Xia for problem-solving advice and for being a superlative officemate.

Thanks to Steve Leeb and the Course 6 department for providing a TAship from which I learned a lot.

Thank you to my family and friends, whose undying love and support have sustained me through my time at MIT. Thank you to my labmates, solar car teammates, and wolf pack buddies for the good times. Thanks to the MIT community for teaching me the meanings of IHTFP.


Contents

Abstract
Acknowledgements
List of Figures
List of Tables

1 Introduction
1.1 Motivation
1.2 EyeRing
1.2.1 EyeRing motivation and basic system
1.2.2 Applications
1.3 Project objectives
1.4 Chapter summary

2 Background
2.1 Previous work in wearable electronics
2.1.1 Existing ring-like and seeing projects
2.2 Current solutions for the visually impaired
2.3 EyeRing background
2.4 Chapter summary

3 Prototype design
3.1 Design preliminaries
3.2 Electrical design
3.2.1 Form factor
3.2.2 Energy budget
3.2.3 Data acquisition and transfer
3.2.4 Peripheral inputs and outputs
3.2.5 Control and firmware
3.3 Mechanical design
3.4 Design phase conclusions
3.5 Chapter summary

4 Prototype implementation
4.1 Debug board
4.2 Final prototype
4.3 A note on component selection
4.4 Chapter summary

5 Firmware implementation and discussion
5.1 Functional overview
5.2 More firmware details
5.3 Chapter summary

6 Evaluation
6.1 Performance and discussion

7 Conclusions and suggestions
7.1 Conclusions
7.2 Suggestions for further work
7.3 Summary

A EyeRing schematics and layouts


List of Figures

1.1 1st-generation EyeRing
1.2 1st-gen EyeRing system block diagram
2.1 Honeywell 8600 Ring Scanner [1]
2.2 Sesame RFID ring [2]
2.3 Fingersight prototype [3]
2.4 OrCam product with glasses [4]
2.5 Close-up of refreshable Braille display [5]
2.6 Software architecture for first-gen EyeRing [Shilkrot]
3.1 2nd-gen EyeRing system block diagram
3.2 1st-gen white EyeRing
3.3 Clay model (target volume)
3.4 EyeRing battery
3.5 1st-gen device block diagram
4.1 Debug board
4.2 Omnivision OV9712 camera
4.3 Front view of camera module
4.4 Rear view of camera module
4.5 Side view of the new boards
4.6 Other side view of the new boards
4.7 Top PCB ("A") stack-up
4.8 Bottom PCB ("B") stack-up
5.1 Functional block diagram for firmware
6.1 Prototype on finger


List of Tables

3.1 Sample battery energy densities
3.2 Selected components
3.3 Theoretical power losses of selected components
6.1 Comparison of EyeRings


Chapter 1

Introduction

This chapter covers the project's motivation and objectives.

1.1 Motivation

This project is motivated by the potential impact of wearable electronics. As electronic components have diminished in size and technology has advanced, the use of mobile devices has exploded. Wireless device penetration in the United States (US) is 102% [6], so it is reasonable to assume that most of the US population uses wireless mobile devices (some people have multiple devices). As widespread as they are, mobile phones have a computing interface whose main modalities are left over from the personal desktop computer: users type on a keypad interface for input and see information on a visual display. Modern technology has enabled other interesting paradigms, and we explore that of gestural input for wearable electronics.

In the past two decades, intentionally wearable devices (such as headsets, smart watches, and augmented glasses) have been prototyped and made available as consumer devices. Currently, a host of wearables exist that bring to life the artifacts¹ that have been hypothesized for decades [7]. These devices have myriad applications and fall under the overarching goal of improving human capabilities. Many areas of exploration exist in the design space. We are motivated by the search for a wearable that is relatively seamless to use. We also are motivated by the natural gestural language that humans use.

¹Tools or devices that augment human intelligence.


Qualities that we seek in a wearable are: natural, using gestures and commands that humans naturally intuit; immediate, meaning instantly available on demand, as opposed to accessible only after unlocking a screen and navigating to an app; non-disruptive, so that we may use our senses and bodies unhindered, especially our sight and our hands; and discreet, to increase portability, mobility, and user acceptance.

This project was carried out in the Fluid Interfaces group of the MIT Media Lab. The goal of the group is to explore new ways that people can interface and interact with their everyday environments. Areas of interest include wearable devices, augmented reality, and intellect augmentation. The EyeRing project is creating a wearable device that has both universal and niche applications. Our original prototype was designed as assistive technology for the visually impaired. Shilkrot and Nanayakkara were motivated to help the blind "see" visual information that sighted people take for granted.² Reports show that many VI individuals cannot perform tasks that are required to function as independent adults. For instance, many VI people are not able to manage their finances independently or even withdraw cash from an ATM. Worldwide, there are 285 million visually impaired people. Of them, 39 million are blind [8] and could benefit from our work.

There are two main areas of text-based information that the blind must interpret. The first is reading published text in paper or digital form. The second is reading text in everyday contexts, like public transportation maps, food and medicine labels, and restaurant menus. In both areas, studies have shown that blind people overwhelmingly have difficulties in accessing relevant information. Text-based interpretation tools exist, but many are either cumbersome to use, not mobile, or unable to parse all of the types of information available. The Royal National Institute of Blind People found that, of the 500 visually impaired people surveyed, only 23% managed their finances completely independently. Various activities eluded their grasp: for example, 45% of the people "experienced difficulties with distinguishing between bank notes and coins." One person reported that "because [ATMs] are different therefore I'm unsure what service I am asking for because I cannot see what I am pressing on the machine" [9].

²"Visually impaired" (VI) refers to those with varying degrees of eyesight degradation. Our assistive technology research is relevant mainly to the completely blind and those with severe sight impairment. In this paper, the terms "visually impaired" and "blind" are interchangeable and refer to the subset of the population who could benefit from EyeRing.


Not only do blind people have disadvantages in reading text, but they also have issues with interpreting other visual information like colors, depth, and object locations. For example, a 2012 interview with a blind user revealed that in order to select a shirt to wear, the user needed to unlock his mobile phone, pick the right application, and use the phone's camera to find out the color of the shirt [10]. Thus, we see the need for a device to aid the blind, because the world requires much visual sense to interpret. Existing solutions can be very difficult to use: for example, many VI people have difficulty aiming devices correctly at text, such that their results are subpar due to poor focusing or centering. A finger-mounted, discreet, feedback-enabled device could help the blind interface with their environments more easily and parse information that would otherwise be unavailable to them. Finger pointing has been shown to be a natural gesture in many cultures [11]. Despite the universality of the gesture, relatively few finger-worn user interfaces have been developed; EyeRing addresses this opening in the design space. The EyeRing project also investigates how devices can provide just-in-time information without being obtrusive and without impeding the user's natural experience. This author was motivated by the idea that custom hardware could greatly increase the functionality and user acceptance of EyeRing, and could erode the disadvantages that the blind face.

1.2 EyeRing

The EyeRing project aims to use pointing gestures to make multimodal input more intuitive and natural. EyeRing was originally developed to aid the blind in reading visually communicated information, like price tags and product labels. We are now exploring applications for sighted people as well.

1.2.1 EyeRing motivation and basic system

The basic EyeRing system, first conceived in 2012, consists of the EyeRing device and a host that performs application processing. The EyeRing device consists of a microcontroller, a wireless communication module, peripheral I/O, and energy storage. The previous ("first-generation") implementations used the Teensy 2.0 Arduino-based microcontroller, the RN-42 Bluetooth module, an Omnivision-based camera module with an image compression engine, a push button, and a lithium-polymer battery with a SparkFun control board. A side view is shown in Figure 1.1.

FIGURE 1.1: A side view of a first-generation EyeRing implementation.

The push button and camera are input peripherals whose data are sent via Bluetooth to the host, which can be a mobile phone or a desktop computer. The host then uses the input data to gather information and send relevant output to the user. In some applications, the host also uses its microphone and parses spoken inputs from the user. The host typically outputs audio speech that the user can hear. A system block diagram is shown in Figure 1.2.

1.2.2 Applications

EyeRing's assistive technology applications have been demonstrated to help the visually impaired, especially the completely blind, do everyday tasks more easily. Applications have been developed for identifying currency and supermarket items, reading price tags, and copying and pasting text. User studies showed that these applications helped the blind shop and use computers without needing assistance [10]. Currently, the EyeRing team is also working on a text-reading application for published documents and everyday items (like printed books and financial statements). We believe that a two-pronged approach (object recognition and text reading) will be most useful to the blind.

FIGURE 1.2: EyeRing system block diagram for previous implementations.

Because EyeRing's technology can be useful to many subsets of the population, we are developing applications for sighted people. People use the pointing gesture when querying objects, as in "what is this?" "how does it work?" or when drawing attention to an extraordinary object or event: "hey, look over there at the sunset!" Pointing is also used to represent spatial reasoning: "move that box to the left." When the pointing gesture is combined with EyeRing's computational abilities, we can imagine useful applications. The host processor could hypothetically perform object and character recognition, mine databases, acquire and upload information from the cloud, store inputs into folders or documents, communicate with other people, and more.

We reasoned that learning is a core application of EyeRing, because pointing is a canonical inquiring gesture, and retrieving educational information can be done with the host processor. We also identified navigation and "enlightened roaming" as another key application: learning about local landmarks and buildings could be useful both for tourists on a mission and for inquisitive explorers who just want to play. Finally, we recognized life and health logging as other potential applications. By taking images and recording motion, users could capture important life events as well as mundane meals to compose a portfolio of their lives.


We found the learning and navigation applications to be more technically interesting, so we are pursuing those. Earlier in 2013, an application was developed for learning to read musical scores: the application reads sheet music, plays the notes audibly, and shows the user which piano keys correspond to the music. We are now working on an application for indoor and outdoor navigation, which will supply just-in-time information based on location and gestures.

1.3 Project objectives

Due to limitations in hardware, first-generation EyeRing devices can be improved upon greatly. The Teensy's 8-bit microcontroller has limited functionality and is a bottleneck in data acquisition and transfer. In addition, the Teensy implementation has unreliable latency in running firmware. The RN-42 Bluetooth module ships with a software stack that limits the throughput of the communications. Additional peripherals, such as motion capture sensors, could be added to track the user's finger speed and acceleration and to expand the gestural input types. The device could be made smaller to increase user acceptance, and its battery life (and general energy budget) could be improved. Many of the limitations described can be resolved with custom hardware.

The primary objective of this project was to create a more custom hardware implementation ("second-generation") of EyeRing. Many of EyeRing's hypothesized use cases would be furthered and enabled by a second-generation device. Project benchmarks were: to demonstrably improve upon EyeRing's size, energy budget, and data capability; to develop a robust firmware suite; and to assist in the development of user-side applications. The EyeRing improvements could be made by substituting some or all of the COTS modules with custom PCBs. These custom boards would use components chosen to fulfill EyeRing's performance and efficiency needs.

1.4 Chapter summary

Wearable electronics have huge potential to influence human society, and we are particularly motivated by creating an intuitive, gestural wearable. Such a device has many possible uses: to help us learn new skills, navigate areas and discover places, and record life and health metrics. It could also greatly improve the lives of blind people and narrow the gap between what sighted and what blind people can "see."

EyeRing, this project's device, has been prototyped before and shown as a proof-of-concept for assistive technology for the blind. It has also been implemented as a learning tool for sighted people. EyeRing's full potential could be explored with a better design, which is the goal of this research. We envision EyeRing as a tool for both organized and whimsical learning. One could use EyeRing in a classroom setting to learn to read music, and then go outdoors to explore and learn about nature. In both scenarios, EyeRing could snap memories and salient tidbits of one's experiences.


Chapter 2

Background

Relevant wearable electronics projects are presented. Devices that are similar in form and function to EyeRing are listed. Limitations and tools pertaining to the visually impaired are covered. Previous work on EyeRing is explained.

2.1 Previous work in wearable electronics

As discussed in the previous chapter, we are interested in creating a gestural wearable device with the following qualities: natural, immediate, non-disruptive, and discreet. The purpose of creating such a wearable is to further the cause of intelligence-augmenting devices. Our dream for such devices is to empower humans with joy, purpose, and experiential fulfillment.

As EyeRing is at the intersection of several fields (i.e. gesture-based, finger-worn, wearable, intellect-augmenting devices), we will explore previous work in those related categories from the recent past.

The pointing gesture was used in Bolt's Put-That-There, an MIT project where a user could create and manipulate shapes by speaking and pointing. Put-That-There would parse the user's speech, recognize the pointing direction, and display the environment with a room-size projection and several computer monitors [12].

In the 1990s, emerging technologies enabled researchers to create the first wave of wearable devices. MIT's Starner and Rhodes explored augmented memory with the Remembrance Agent, which used a field-of-view display to show search results of emails and past documents. Adding a head-mounted camera to this display allowed the user to point at display options with her finger [13]. Thus, a mobile, wearable device to show relevant just-in-time information was born. A decade later, MIT's Maes and Merrill created the Invisible Media project, which provided a hands-free audio "teacher" or "recommender" to the user, who could point at or manipulate queried objects [14]. The user could wear a ring and point at instrumented scenarios or objects. One application was My-ShoppingGuide, where users could experience personalized suggestions for food and health products. Another application was Engine-Info, where users could learn about internal combustion engine components. Invisible Media used Rekimoto's two guidelines that wearable technologies should be hands-free and socially acceptable.

Currently, augmenting mental and physical intelligence is manifested in many ways: enforcing habits, providing just-in-time information, recording health metrics, and life logging. Some well-known recent devices use augmented reality to display information to users: Mistry's SixthSense uses projected images along with a camera to capture hand gestures, and Google Glass uses a heads-up display. Smart watches are gaining in popularity and include the Pebble and the Samsung Galaxy Gear; these watches do away with some of the inconveniences of mobile phones. Health monitoring devices include the Misfit Shine, Fitbit products, and Nike Plus. Life logging projects include SenseCam and Memoto.

2.1.1 Existing ring-like and seeing projects

We have surveyed solutions that are similar to EyeRing in form and/or function. Existing ring-like devices typically perform single functions. Of these devices, the optical sensing types use the pointing gesture. Logisys, Mycestro, and Brando are among those who produce optical finger mice for interfacing with computers. Typically the sensor camera faces distally, and a scroll wheel is mounted on the mouse's side for thumb scrolling. Various manufacturers produce finger-mounted optical barcode scanners for package handling and inventory needs. These devices are usually quite large and include a thumb-actuated button; see Motorola's RS-1 Ring or Honeywell's 8600 Ring Scanner (shown in Figure 2.1). Sesame Ring, from the MIT-based startup Ring Theory, uses a finger-mounted RFID tag to interface with public transportation card readers. This device allows seamless subway (MBTA "T" trains) access by tapping the ring against a subway entrance kiosk. See Figure 2.2 for a product rendering.

FIGURE 2.1: Honeywell 8600 Ring Scanner [1].

Ubi-finger was a Japanese project that used a sheath-like finger-worn device to remotely control objects: by pointing and clicking at light switches, audio speakers, and other devices, the user could turn them on or off and adjust their controls. Fingersight used a camera and a vibration motor to help the visually impaired sense and identify objects in 3-dimensional space. Explored by researchers at Carnegie Mellon University, Fingersight provided haptic feedback to users when the device "saw" object edges or features from a database. Fingersight could also allow the user to control a simple interface, like a light switch, after identifying it [3]. Fingersight used a harness to connect to a host processor (as seen in Figure 2.3).

FIGURE 2.2: Sesame RFID ring [2].

FIGURE 2.3: Fingersight prototype [3].

OrCam is an Israeli product that parses visual information from a head-mounted camera (see Figure 2.4). A host processor, wired to the headpiece, performs object recognition and generates speech, which is then conveyed to the user through a bone-conduction earpiece [4]. OrCam uses an object database and has implemented machine learning so the device can add to its database.

FIGURE 2.4: OrCam product with glasses [4].

2.2 Current solutions for the visually impaired

As explained in the previous chapter, blind people need more ways to access visually conveyed information. This is due to the following limitations:

* Blind-friendly publishing standards for digital multimedia are not universally used.
* Not all digital media can be converted into blind-friendly formats anyway, due to legal issues.
* Even when media are converted to accessible formats, solutions (like software or devices) can be cumbersome and difficult to use.
* Not many solutions exist to tackle other forms of visual data, like colors, depth, and object proximity.

Many popular e-books and documents are available in blind-accessible formats such as Braille, large text, and audio description (AD). The advent of electronic readers and portable Braille readers has helped disseminate such formats. According to the Royal National Institute of Blind People (RNIB), an advocacy and research group in the U.K., 84% of the top ten thousand most popular e-books were fully accessible to the blind [15]. However, the World Blind Union estimates that only 1%-7% of worldwide published books are converted to blind-accessible formats [16]. Thus, less popular books, textbooks, scholarly articles, and other documents are largely unavailable to the blind.

Copyright restrictions are a hindrance to converting documents to blind-accessible formats. For instance, the 1996 Chafee Amendment allows copyright-protected literary documents to be converted by non-profit organizations. However, this is still limiting, because such organizations have relatively low bandwidth, and converted documents cannot be shared with people in other countries (save for inter-library loan agreements) [17]. More recently, the World Intellectual Property Organization (WIPO), an agency of the United Nations (UN), passed a treaty that bypassed copyright law to benefit blind access. Some nations (including the United States) opted not to sign the treaty [18]. Even if the US were to sign and ratify the treaty, it is not guaranteed that the treaty would be implemented fairly, given the anti-treaty stance taken by powerful lobbies like the Motion Picture Association of America (MPAA) [19].

Existing solutions, like scanners and software to read documents, are useful. However, surveyed blind users commented that many such programs can be inflexible, reading from the top to the bottom of the document whilst the user only wanted to scan sentences throughout the page. In addition, some books are not available online. Furthermore, the RNIB reports that the elderly blind tend not to use computers at all (12% use computers in the age group over 75) [20].

Format conversion solutions exist and can help. However, as explained previously, many documents cannot be converted to blind-accessible formats due to legal restrictions. Text conversion methods are also limited by publishing standards and technologies. One of these standards is DAISY¹ Digital Talking Book, which incorporates MP3 and XML data with electronic multimedia, like books or magazines. DAISY files can be easily parsed into audio, large text, or refreshable Braille. EPUB3 is a digital publishing standard that, among other things, includes DAISY-compatible features. For example, EPUB3 calls for mathematical equations to be embedded in XML format instead of as images; this way, the equations can be read out to blind users. EPUB3 has been hoped to become a universal publishing format for digital multimedia. However, many publishers prefer to use proprietary formats to lock in customers [21]. In addition, application and device manufacturers don't always comply: among Adobe Digital Editions, Amazon Kindle, Apple iBooks, CourseSmart, Google E-Books, Nook, Safari Books Online, and many other readers and apps, not one has complete EPUB3 compatibility [22]. Only the future will tell whether EPUB3 catches on universally. Until then, format compatibility will remain a recurring thorn in blind users' sides.

¹Digital Accessible Information System

When blind-friendly file formats are used, they often work in tandem with screen readers or Braille devices. Screen readers, which are programs that use speech synthesizers to read text, include JAWS, HAL, Window-Eyes, and Blio. Some e-readers and mobile devices have built-in speech synthesizers, like Apple iOS's VoiceOver. Plugging into computers or mobile devices, refreshable Braille displays can convert digital text into Braille. These displays use moveable pins to reconfigure the Braille surface that users read; Figure 2.5 shows the array of pins that form the Braille characters. Braille converter software, like Braillemaker and Dolphin EasyConverter, converts digital text into digital Braille that can then be embossed onto paper. Braille labelers allow users to mark everyday items for identification (e.g. to mark prescription medications for differentiation). Labelers like 6dot can work with a keyboard interface, allowing ASCII-to-Braille translation, or with a 6-button interface to explicitly call out the Braille dot configuration.

FIGURE 2.5: Close-up of refreshable Braille display [5].

Many natural environments contain visual information that has not been addressed widely. The RNIB reports that 77% of blind people who do not go shopping avoid it because they have "difficulty seeing prices or reading labels" [20]. An RNIB survey on medicine labels found that a majority of surveyed VI people would use a hypothetical pen-like device that could read labels, as Braille labelling can be inconsistent in quality. (For example, 73% of the 120 surveyed said that they had experienced Braille labels that had been covered up by other labels, making the Braille unreadable [23].) OrCam, mentioned earlier in this chapter, is a new product that aims to recognize objects. Fingersight, the CMU research project, can identify depth and textures, but no existing commercial product has similar functionality. Finally, no known products combine object recognition, text reading, and sensory information.

2.3 EyeRing background

As described in the previous chapter, EyeRing came about by thinking of finger-pointing scenarios. With the first-generation EyeRing, several applications were developed. An EyeRing ShoppingAssistant application had two modes to help blind people shop: CurrencyDetector, to identify bills of money, and TagDetector, to read UPC barcodes on product labels. A user group of 29 blind people commented, almost unanimously, that the device was very easy to use and would solve some of their everyday challenges. In a supermarket, a blind user could use the EyeRing to choose a package of Cheez-Its out of shelves of morphologically similar foodstuffs. The software architecture for ShoppingAssistant is shown in Figure 2.6. Another application, DesktopAssistant, aided sighted people in "copying" images and text from the environment and then pasting the visual data into a digital document. Most of the surveyed users agreed that EyeRing was useful for their figure and text insertion.

FingerDraw, a project spun off from EyeRing, is a joint research project between MIT and the Singapore University of Technology and Design (SUTD). Using the EyeRing hardware, FingerDraw can capture colors and textures from the physical world.

Users can later use the captured palette in their tablet-drawn artwork, thus keeping natural environments connected to digital creations. FingerDraw recalls the selection and mixing of oil paints for artwork and also draws inspiration from fingerpainting. FingerDraw is mainly used as educational technology for children to stimulate creativity and to encourage natural, multisensory experiences [24].

FIGURE 2.6: Software architecture for first-gen EyeRing [Shilkrot].

More recently, joint research from MIT and SUTD explored feedback modalities for EyeRing prototypes. This ongoing research analyzes the effects of using haptic feedback (vibromotors) in different spatial and temporal configurations.

Although EyeRing has received overwhelmingly positive feedback, many obstacles stand in its way to becoming a truly usable wearable. The hardware prototype could use revision, and the host application software could also be expanded and improved.

2.4 Chapter summary

In this chapter, we first discussed wearable electronics and intelligence-augmenting technology. A brief history of wearable electronic devices similar to EyeRing was covered. The state of affairs and technologies for blind people were described. Finally, previous work in the EyeRing project was presented.


Chapter 3

Prototype design

This chapter presents the design considerations for the new EyeRing prototype. Electrical, mechanical, and firmware specifications are discussed, and design decisions are reviewed.

3.1 Design preliminaries

Our high-level concept for the second-generation EyeRing is to support powerful and valuable applications for sighted and blind people. The requirements for such applications drive the electrical specifications. Many of the current EyeRing applications could be improved with better electrical characteristics, and previously unreachable applications could be implemented. The main improvement fields are form factor, energy budget, and data transfer rate.

Additional desired features are gesture and motion capture, audio output, and directional (compass) capabilities. With previous EyeRing prototypes, users reported that it was difficult to aim the EyeRing camera properly. To address that concern, we add two feedback modalities to the second-generation EyeRing. First, a set of vibration motors guides blind users in text reading. Additionally, a laser diode beam shows sighted users where they are aiming the camera. We also added a motion processing unit (MPU), consisting of a 3-axis accelerometer and a gyroscope. This unit captures gestural input.


FIGURE 3.1: EyeRing system block diagram for the new design.

The new EyeRing system is much like the previous design. Note in Figure 3.1 that the new design includes haptic and visual feedback, and that the EyeRing device captures gestural data in addition to images and button presses.

3.2 Electrical design

3.2.1 Form factor

Obviously, size and shape are major factors when users decide to use a wearable technology. Previous EyeRing iterations have been roughly 3.25 cubic inches in volume and have assumed both block-like (Figure 3.2) and aerodynamic shapes. The battery is one of the most volumetric components of the electrical system. There is a clear and inherent tradeoff: the smaller the battery, the less charge it holds. Thus, given a certain battery chemistry, a smaller and more compact design would have less energy available to power the electronics. The design assumes an energy budget for the entire device, calculates a necessary battery charge capacity, and then chooses the smallest possible battery. The energy budget is discussed in more detail in Section 3.2.2.

FIGURE 3.2: A first-generation EyeRing with white housing.

The peripherals (sensors and input) and microcontroller components were chosen to be as small as possible. For shape and usability, the obvious requirements are that the camera must face a useful direction and the input devices must be intuitive and easy to trigger. We found that actuating input buttons with our opposable thumbs was the best method, so it was determined that a forward-facing camera and a side-mounted input button would be best. It suffices to say that care was taken to minimize volume and area when selecting components and laying out the 6-layer PCB.

We used previous mock-up clay models to figure out a reasonable target volume. The clay model is shown in Figure 3.3.

FIGURE 3.3: Clay model (target volume).

3.2.2 Energy budget

Currently, users expect to be able to use personal electronics for at least a few hours on a single battery charge. The first-generation EyeRing design lasts for about one hour with moderate camera use (i.e. roughly one camera shot per second). We aim for the second generation to last an order of magnitude longer (and thus be able to support moderate use for a full work day).

The energy budget accounts for stored energy (the battery), consumed energy (the electronics), and gained energy (harvesting techniques). Harvesting techniques such as photovoltaic, thermal-gradient, piezoelectric, and radio-frequency harvesting were investigated and deemed suitable perhaps for future generations of EyeRing: the added bulk and complexity of such harvesting systems are not worth the amount of energy available for harvest.

Stored energy can be approximated by battery charge capacity (amp-hours) multiplied by nominal battery voltage (volts), giving watt-hours. This is an approximation because the battery voltage follows a nonlinear curve through its discharge. The watt-hour energy divided by the required usage time (in hours) gives the maximum average power that the battery can supply. Because EyeRing is a wearable device, we seek a battery that is energy-dense in terms of both volume and mass. In addition, the battery must be able to deliver enough current to supply the Bluetooth module's power-hungry transmissions. Typical characteristics for different battery chemistries are shown in Table 3.1. As shown, lithium-ion (Li-ion) and lithium polymer (LiPo) battery cells have relatively high energy density.

TABLE 3.1: Sample battery energy densities.

| Chemistry | Manuf. or vendor | Part number | Mass (g) | Volume (cm³) | Energy (W-h) | Energy per unit mass (W-h/g) | Energy per unit volume (W-h/cm³) | Notes |
|---|---|---|---|---|---|---|---|---|
| Lithium ion | Panasonic | NCR18650B | 47.5 | 15.5 | 12.06 | 0.25 | 0.78 | Cylindrical. Too large for EyeRing. |
| Lithium polymer | SparkFun | PRT-00731 | 2.65 | 1.9 | 0.4 | 0.15 | 0.21 | Pouch. Chosen for EyeRing. |
| Lithium ion | Panasonic | CR2032 | 2.81 | 1 | 0.75 | 0.27 | 0.75 | Coin cell. Primary; insufficient current sourcing. |
| Lithium thin-film | Infinite Power Solutions | MEC202 | 0.98 | 0.23 | 0.01 | 0.01 | 0.04 | Thin-film. High current capabilities; good for energy harvesting. |
| "High Energy Cell" | Infinite Power Solutions | N/A | N/A | 0.1 | 0.34 | N/A | 3.4 | Available 2014; coin cell form factor. |
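To make the stored-energy arithmetic above concrete, here is a minimal C sketch. The 110 mA-h capacity is an assumed figure for the chosen SparkFun pouch cell (consistent with the roughly 0.4 W-h listed in Table 3.1), not a value stated in the text.

```c
#include <stdio.h>

int main(void)
{
    /* Stored energy ~= capacity (A-h) x nominal voltage (V). */
    const double capacity_ah = 0.110; /* assumed ~110 mA-h LiPo pouch cell */
    const double v_nominal   = 3.7;   /* nominal LiPo cell voltage */
    const double usage_h     = 10.0;  /* target usage: a full work day */

    double energy_wh = capacity_ah * v_nominal;      /* ~0.41 W-h */
    double p_avg_mw  = energy_wh / usage_h * 1000.0; /* max average power */

    printf("Stored energy:     %.2f W-h\n", energy_wh);
    printf("Max average power: %.1f mW over %.0f h\n", p_avg_mw, usage_h);
    return 0;
}
```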

The SparkFun lithium polymer batteries were chosen due to their electrical characteristics, ease of acquisition, and rectangular form factor (for easy packing into a ring housing). The Panasonic 18650 form factor cells have high energy density and are used in laptops and high-performance electric cars, but are too large to use in EyeRing. The Panasonic CR2032 and other similar coin cells are not rechargeable and cannot source enough current. The IPS MEC202 is great for energy harvesting (due to its ability to withstand large voltages and charge at high currents) and may be a good candidate for future EyeRing prototypes. The IPS High Energy Cell is a coin-cell replacement that is extremely energy-dense; as of summer 2013 it was not available, but it may be good to keep in mind for the next EyeRing generation. We did consider other battery chemistries, like the zinc-air batteries used in hearing aids, but chose to stay with lithium polymer.

FIGURE 3.4: The lithium polymer battery used for both 1st- and 2nd-generation EyeRing devices.

In the final 2nd-generation prototypes, we chose Linear Technology's LTC3554 Power Manager IC for charging the battery via USB and for controlling two buck converters that produced 1.8V and 3.3V. We also added a resistor divider as an input to one of the microcontroller's ADC pins. This divider allowed the measurement of the battery voltage for telemetry purposes.
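As an illustration of the telemetry conversion, a hedged C sketch follows. The divider ratio, ADC reference, and resolution are placeholder assumptions; the thesis does not give the actual resistor values or ADC configuration.

```c
#include <stdint.h>

/* Assumed divider and ADC parameters (placeholders, not from the thesis). */
#define R_TOP_OHM     100000.0f /* battery to ADC pin */
#define R_BOT_OHM     100000.0f /* ADC pin to ground  */
#define ADC_VREF      3.3f      /* ADC reference voltage */
#define ADC_FULLSCALE 1023.0f   /* 10-bit conversion */

/* Convert a raw ADC reading into the battery voltage for telemetry. */
float battery_voltage_from_adc(uint16_t raw)
{
    float v_pin = ((float)raw / ADC_FULLSCALE) * ADC_VREF;
    /* Undo the divider: Vbat = Vpin * (Rtop + Rbot) / Rbot. */
    return v_pin * (R_TOP_OHM + R_BOT_OHM) / R_BOT_OHM;
}
```

With this assumed 2:1 divider, a fully charged 4.2 V battery presents 2.1 V at the pin, safely below the 3.3 V reference.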

The following sections describe the selection of other ring components, and the theoretical energy budget for these components is at the end of this chapter.

3.2.3 Data acquisition and transfer

Previously, first-generation EyeRing devices were capable of sending VGA-resolution images at a maximum of one per minute. As seen in Figure 3.5, potential bottlenecks were: the camera processing rate, the camera-microcontroller SPI rate, microcontroller processing, the microcontroller-Bluetooth UART rate, and Bluetooth module limitations. The theoretical data capability of the RN-42 Bluetooth module is 240 kbps, using the Serial Port Profile (SPP).¹

¹Over-air throughput is roughly 1-2 Mbps.

FIGURE 3.5: First-generation device block diagram. (Battery and power management board not shown.)

For the new EyeRing, we aim for VGA-resolution video at 10 fps or higher. This gives us a required data rate of 9.21 megabytes per second, uncompressed.² JPEG compression typically reduces image data by an order of magnitude, so we aim for roughly 1 megabyte per second. We also require an embedded system that can move large chunks of data at this rate or higher. (Thus, a microcontroller or microprocessor with Direct Memory Access (DMA) capabilities is desirable.) Many applications require data rates on the order of 1-10 Mbps. Technologies that allow that include WLAN, Bluetooth, Zigbee, and NordicRF. Many COTS modules limit packet size and are not customizable, and the data transmission technology must be broadly applicable and easily ported. Therefore, proprietary technologies like Zigbee and NordicRF were ruled out because they require the host processor to be equipped with a non-standard transceiver. WLAN and Bluetooth were compared in terms of power consumption, device range, and complexity. Ultimately, Bluetooth was chosen. Although WLAN has a much higher range, EyeRing is meant to be paired with a nearby host device, so the required range is no more than a couple of meters. Between WLAN and Bluetooth, Bluetooth has favorable power consumption characteristics [25]. Bluetooth is a personal area network (PAN) technology, which is more aligned with the purpose of EyeRing. In addition, the EyeRing device, like many wearables, is paired to a personal cell phone; Bluetooth is well suited to such applications where an "accessory" device is tethered to a host. Finally, the Bluetooth Special Interest Group (SIG) is continually working on the standard to better suit application needs. For example, Bluetooth Low Energy (BLE), the newest Bluetooth version, has extremely low data rates but allows devices to run on very low power. This use model could work for some EyeRing packets, like simple telemetry (containing battery voltage and user motion).

²RGB data is 3 bytes per pixel, and VGA resolution is 640 × 480 pixels.
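The data-rate arithmetic above, written out as a small C sketch (the 10x JPEG compression ratio is the order-of-magnitude assumption stated in the text):

```c
#include <stdio.h>

int main(void)
{
    /* Uncompressed VGA video: RGB at 3 bytes per pixel. */
    const long width = 640, height = 480, bytes_per_px = 3;
    const long fps = 10;

    long   frame_bytes = width * height * bytes_per_px; /* 921,600 bytes */
    double raw_mbs     = frame_bytes * fps / 1e6;       /* ~9.2 MB/s */
    double jpeg_mbs    = raw_mbs / 10.0;                /* assume ~10x JPEG */

    printf("Uncompressed: %.3f MB/s\n", raw_mbs);
    printf("JPEG (~10x):  %.3f MB/s\n", jpeg_mbs);
    return 0;
}
```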

Among the smallest available Bluetooth modules, some came with "plug-n-play" Bluetooth software stacks, and others required the host microcontroller to run a Bluetooth stack. For maximum packet flexibility, we chose one of the latter Bluetooth modules, the PAN1326. The PAN1326 was also the smallest Bluetooth module we could find, and it is capable of BLE as well as classic Bluetooth.

3.2.4 Peripheral inputs and outputs

The added MPU can capture 3-axis angular velocity and 3-axis acceleration. The chip can be configured to provide microcontroller interrupts upon the following events: recognized gestures, panning, zooming, scrolling, free falling, high-G acceleration, zero-movement conditions, tapping, and shaking.
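As a sketch of how such event interrupts might be configured, consider the following C fragment. The register addresses follow the public InvenSense MPU-6000 register map, but `spi_write_reg` is a hypothetical helper, and the threshold/duration values are placeholders rather than the settings used in the actual firmware.

```c
#include <stdint.h>

/* MPU-6000 register addresses (InvenSense register map). */
#define MPU_PWR_MGMT_1 0x6B
#define MPU_MOT_THR    0x1F /* motion-detection threshold */
#define MPU_MOT_DUR    0x20 /* motion-detection duration  */
#define MPU_INT_ENABLE 0x38
#define MPU_MOT_EN_BIT 0x40 /* motion-interrupt enable bit */

/* Hypothetical SPI helper: write one byte to an MPU register. */
extern void spi_write_reg(uint8_t reg, uint8_t val);

/* Ask the MPU to interrupt the microcontroller on motion, so the
 * micro can sleep until the user actually moves the ring. */
void mpu_setup_motion_interrupt(void)
{
    spi_write_reg(MPU_PWR_MGMT_1, 0x00);           /* leave sleep mode      */
    spi_write_reg(MPU_MOT_THR, 20);                /* placeholder threshold */
    spi_write_reg(MPU_MOT_DUR, 40);                /* placeholder duration  */
    spi_write_reg(MPU_INT_ENABLE, MPU_MOT_EN_BIT); /* enable the interrupt  */
}
```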

The thumb-actuated button was kept.

The camera module was initially replaced by a different camera and compression engine set, as described in the Debug Board section of Chapter 4, but due to time constraints, we stayed with the C329 camera module. The camera module itself is capable of capturing up to 15 fps.

Haptic and visual feedback modalities were added. We designed a simple FET circuit to control two vibromotors, and a similar FET circuit to control the laser diode. More elaborate feedback mechanisms, like LED displays or projectors, were considered and discarded as too large and power-hungry to be practical.

3.2.5 Control and firmware

The previous EyeRings used the TeensyDuino 2.0, which is based on the Atmel AVR architecture and uses the Arduino IDE and programming language. Previous endeavors also explored the Teensy 3.0, which uses an ARM Cortex-based Freescale MK20DX128. Based on our experience, we desired the new control system to have the following attributes:

* Direct Memory Access (DMA) to allow speedy and concurrent data transfers
* Relatively high-speed communication modules (like UART and SPI)
* Low power consumption
* Fast clocking ability
* Accessible programming software and support
* Relatively understandable firmware and programming options, to allow the project to be passed on to other people

For second-generation EyeRings, we considered the following solutions:

* 8-bit microcontrollers, e.g. the Atmel AVR
* 16- and 32-bit microcontrollers, e.g. the Atmel UC3
* ARM-based chips
* Programmable Logic Device-based chips, e.g. Cypress Semiconductor's Programmable System-on-Chip (PSoC)
* FPGAs, e.g. the Lattice iCE

We eliminated 8-bit microcontrollers due to their lack of desirable features (e.g. no DMA and slow clocking). PLDs and FPGAs were eliminated due to the desire to enable project longevity and inheritance. Between 16-bit and 32-bit microcontrollers, the Atmel UC3 was chosen due to its reasonably extensive support community, its provided firmware suite, and its free development tools.

3.3 Mechanical design

The mechanical aspect of EyeRing can be broken into two components: the ring housing, and the PCB component selection, geometry, and assembly. Due to time constraints, the ring housing was not covered in the scope of this thesis and will be designed and fabricated at a later date. The relevant PCB components (buttons, connectors, and mounting clips) are discussed in the next chapter, as their details are more relevant in the implementation stage.

3.4 Design phase conclusions

This chapter's discussion is summarized in Table 3.2, which shows the considered components for the new EyeRing design. The chosen components are highlighted, and their energy requirements are tabulated in Table 3.3.

TABLE 3.2: Selected components

| Function | Component | Manufacturer |
|---|---|---|
| Control | AT32UC3B1256 microcontroller | Atmel |
| Bluetooth | PAN1326 | Panasonic (based on TI chipset) |
| Image sensor | C329 module | Based on Omnivision components |
| Motion capture | MPU-6000 | InvenSense |

For the power consumption table, we used component datasheet information. The battery voltage, which ranges from 4.2 V down to 3.6 V, was approximated as 4 V where it was used. For instance, the PAN1326 radio runs off the battery voltage, so we calculated its power loss using 4 V and the datasheet-given current draw. Its worst-case power, 160 mW, should be averaged over time to determine the actual power consumption; the radio's effective duty cycle depends on how much data needs to be transmitted, the sniff interval, and the packet type.

The 1.8V-3.3V level shifter draws 5 µA from each of its voltage rails, so we simply summed the 1.8 V and 3.3 V rails (5.1 V) for its calculation. The power manager draws varying amounts of quiescent current from the battery, depending on conditions, so we took the worst-case current, 341 µA, for our calculations.


TABLE 3.3: Theoretical power losses of selected components

| Device | Voltage (V) | Peak current (mA) | Quiescent current (mA) | Worst case power (mW) | Best case power (mW) |
|---|---|---|---|---|---|
| AT32UC3B1256 micro | 3.3 | N/A | 15 | N/A | 49.5 |
| PAN1326 Bluetooth module, power | 4 | 40 | N/A | 160 | N/A |
| PAN1326 Bluetooth module, logic | 1.8 | N/A | 1 | N/A | 1.8 |
| TXB0106 level shifter | 5.1 | N/A | 0.005 | N/A | 0.03 |
| C329 camera | 3.3 | 64 | 20 | 211.2 | 66 |
| MPU-6000 | 3.3 | N/A | 4.1 | N/A | 13.53 |
| Power manager | 4 | 0.341 | N/A | 1.36 | N/A |
| Total (best case) | | | | | 130.86 |

The buck converters, which supply all non-battery-sourced currents, have inefficiencies that we estimate at no more than 20% of input power, so the worst-case buck converter efficiency is 80%. The chosen SparkFun LiPo battery contains about 400 mW-h of energy. When we include DC-DC converter efficiency and Bluetooth transmission power, this means we can theoretically run the ring for two hours on a single battery charge. We can extend this battery life slightly by using microcontroller sleep modes. Although this battery life does not meet our target, it is an improvement.
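For reference, the runtime estimate can be reproduced with a few lines of C. The 20 mW average radio power is an assumed duty-cycled figure (derived from the 160 mW peak in Table 3.3), not a number given in the text.

```c
#include <stdio.h>

int main(void)
{
    const double battery_mwh  = 400.0;  /* SparkFun LiPo, ~0.4 W-h */
    const double buck_eff     = 0.80;   /* worst-case converter efficiency */
    const double rail_load_mw = 130.86; /* best-case total, Table 3.3 */
    const double radio_avg_mw = 20.0;   /* assumed duty-cycled TX average */

    /* Radio runs straight off the battery; rail loads go through the bucks. */
    double draw_mw = rail_load_mw / buck_eff + radio_avg_mw;
    printf("Estimated runtime: %.1f h\n", battery_mwh / draw_mw);
    return 0;
}
```

With these assumptions the estimate works out to roughly 2.2 hours, consistent with the two-hour figure above.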

3.5 Chapter summary

First, overarching design goals and ideas for the second-generation EyeRing were presented. Then, design considerations were discussed in the areas of form factor, energy budget, wireless communication, control, mechanical assembly, and firmware. Possibilities were shown, and the "winning" candidates were explained.


Chapter 4

Prototype implementation

Because the final second generation prototype needed to be extremely small, we required a multilayer PCB with small form-factor components (mostly quad-flat no-leads (QFN) and thin quad flat (TQFP) packages). This construction is typically difficult to correct if there are errors, especially because many traces are internal. Therefore, a preliminary "debug" board was created to test the circuits. This chapter covers the implementation of the debug board and the final prototype.

4.1 Debug board

For quick design and turnaround, this two-layer board was designed in Advanced Circuits' PCB Artist software. All surface-mount components were broken out to through-hole vias. See Figure 4.1.

FIGURE 4.1: The debug board used to test 2nd-generation circuits.

FIGURE 4.2: Omnivision OV9712 camera.

Included in this PCB were:

* The AT32UC3 microcontroller (B family). This part was a TQFP and had all pins broken out. 8 GPIOs were used with LEDs as status and indication lights.
* The PAN1326 Bluetooth module, with broken-out pins. An SOIC level shifter was used to translate signals between 3.3V and 1.8V logic levels.
* Hardware for an Omnivision OV9712 camera (pictured in Figure 4.2): a flexible flat cable (FFC) connector and an image compression engine. The compression IC, the OV538-B88, comes in a BGA package and has a USB interface. Due to time constraints, we ended up not using this setup and instead used the C329 camera module that previous EyeRings used. The C329 interface (SPI) is much less complex than the USB interface that the OV538-B88 uses.
* A micro-USB connector for programming and power.
* Jumper headers to allow modular testing. (Parts of the board could be electrically isolated from each other.)
* Buttons for triggering and resetting the microcontroller.
* Buck converters to step down from the battery voltage to the voltage rail levels. Unfortunately, these circuits produced sagging voltages and did not quite work; we used different components for the final board.


4.2 Final prototype

Based on the successes and failures of the debug board, we chose components for the final board:

* We kept the microcontroller the same. Bypass capacitors were sized down to 0402, 0603, and 0805 (Imperial) chip packages.
* We kept the Bluetooth module and level shifter. The level shifter is offered in a QFN package (as well as the SOIC used previously), so we chose the QFN to save board space.
* For ease of use, we chose the C329 camera module used in first-generation EyeRings.
* Jumper headers were removed, and buttons were sized down.
* We chose a Linear Technology power management IC, the LTC3554. This QFN chip has battery-charging capabilities and also has two adjustable buck converter controllers for creating the 3.3V and 1.8V rails.
* A resistor divider was added to measure the battery voltage with an ADC on the microcontroller.
* We added the MPU-6000 (Motion Processing Unit), a combination gyroscope-accelerometer.

The final second-generation EyeRing was laid out in Altium Designer and fabricated by PCBFabExpress. See Appendix A for the schematics and layouts. Because we wanted the footprint to be small, we designed two double-sided PCBs to be stacked vertically. The top board is known as board "A," and the bottom board is known as board "B." We decided on 6-layer boards because they were a good compromise between cost and enough layers for shielding and routing. The board component placement constraints were as follows:

* The camera module needed to face forward. The camera also ideally needed to connect to board B (instead of board A) to keep the ring's profile as low as possible.
* The laser diode needed to point forward, with the beam aligned as closely as possible to the camera's line of sight. Because of the C329 board's geometry, we could either mount the laser diode module to the side of the board (which would result in a laser beam very offset from the camera) or mount the laser diode inside the "sandwich" of the two custom PCBs (such that the beam exited through a mounting hole in the C329 board). We chose the latter solution, pictured in Figure ??.
* A 2-row connector between boards A and B was chosen for its geometry, so that the boards would be the correct distance apart for laser mounting purposes.
* The PAN1326 Bluetooth module needed to be placed on board A, on an edge as specified in its documentation, to minimize RF interference.
* The high-speed (USB) signal traces needed to be kept short, and preferably sandwiched between copper planes to keep electromagnetic interference contained.
* The battery needed to be placed below all boards and components to lower the center of gravity of the device. In addition, this allows easier access to the boards for debugging.

FIGURE 4.3: Front view of camera module.

As seen in Figures 4.3 and 4.4, we hacked the camera lens by removing a screw-hole feature to allow the laser beam to shine through the mounting hole. We also needed to replace the C329's original connector (not shown) with a part that we spec'ed out ourselves, due to the lack of documentation on the C329 connector and mating parts.

FIGURE 4.4: Rear view of camera module.

FIGURE 4.5: Side view of the new boards.

Given the above constraints, we determined the layer stack-up and routing details as follows. The stack-ups can be seen in Figures 4.7 and 4.8.

FIGURE 4.6: Other side view of the new boards.

* External layers (top and bottom) were reserved for signal routing, due to ease of access to component pins. Grounded polygon fills were used on these layers as well. Grounded through-hole vias were placed throughout to connect copper islands and to reduce current loops.
* A copper keep-out region was imposed on all layers of both boards underneath the Bluetooth antenna.
* The power manager and buck converters were placed on the bottom of board B to be near the battery. Keeping the battery voltage traces short decreases the resistance that the total current has to run through.
* The microcontroller was placed on the top layer of board B to ease routing of its lines to the camera module.
* As discussed before, the Bluetooth module was placed on the top layer of board A.
* Board A did not have as many signals to route, and had the luxury of 4 layers of copper planes, as only 2 layers were needed for routing.
* Board B required 3-4 routing layers, so we chose the 2 external layers and the 2 most internal layers for routing. The 2 internal routing layers were sandwiched between 2 layers of copper planes.

FIGURE 4.7: Top PCB ("A") stack-up.

FIGURE 4.8: Bottom PCB ("B") stack-up.

4.3 A note on component selection

We aimed to keep the boards as small as possible, so we chose very small components. We realized that in doing so, we traded usability for space. 0402 chip components were very fiddly to work with, and the QFN chips required extra care in soldering. Testing points on the boards was difficult because the test points (surface pads) were so small. Moreover, the buttons that we chose were too small to actually be usable: we will need to add a button cap to the user input button, and the microcontroller reset button is extremely difficult to press, so we may replace it with a larger one. Future developers should buy such components ahead of time for testing.


4.4 Chapter summary

In this chapter, two second-generation EyeRing prototypes were discussed. The first was a PCB for debugging and development. The second consisted of two PCBs to be finger-worn. Implementation constraints and decisions were reviewed.


Chapter 5

Firmware implementation and discussion

Firmware was a large portion of the work involved in this project. In this chapter, a functional overview and implementation details of the firmware are covered.

5.1 Functional overview

The firmware was written to support the desired ring functions: image capture, video capture, motion and gesture data, and low power consumption.

The functions and decisions shown in Figure 5.1 are described below:

* init: When the ring is turned on, this function runs. First, it sets up the microcontroller's GPIO and communication modules (SPI, UART). The direct memory access (DMA)¹ modules are set up for moving information between the camera, the micro, and the Bluetooth module. The microcontroller's SPI module syncs with the camera, and the UART module sets up the Bluetooth module. The PAN1326 requires a TI service pack to be sent every time it boots up, so the microcontroller sends that. The service pack consists of HCI-level commands that configure the PAN1326. After init, the microcontroller goes to sleep.

¹Atmel UC3 documentation refers to DMA as PDCA, the Peripheral DMA Controller. This is because the UC3 micros are only capable of moving data between peripherals and internal memory; they cannot transfer data between blocks of internal memory.
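A minimal sketch of this boot flow, assuming hypothetical driver hooks (the real firmware uses Atmel's framework drivers for GPIO, SPI, USART, and the PDCA):

```c
/* Hypothetical driver hooks standing in for the AT32UC3 peripheral drivers. */
extern void gpio_setup(void);               /* buttons, LEDs, FET outputs  */
extern void spi_setup_camera(void);         /* SPI link to the C329 camera */
extern void uart_setup_bluetooth(void);     /* UART link to the PAN1326    */
extern void pdca_setup_channels(void);      /* UC3 "DMA" (PDCA) transfers  */
extern void bt_send_ti_service_pack(void);  /* HCI-level PAN1326 config    */
extern void mcu_sleep(void);                /* sleep until an interrupt    */

/* Runs once when the ring is turned on, then sleeps. */
void ring_init(void)
{
    gpio_setup();
    spi_setup_camera();
    uart_setup_bluetooth();
    pdca_setup_channels();        /* camera <-> micro <-> Bluetooth moves */
    bt_send_ti_service_pack();    /* required on every PAN1326 boot       */
    mcu_sleep();
}
```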

