
Choosing Methods and Metrics for Design Feedback

by

Riley M. Steindl

Submitted to the Department of Mechanical Engineering in Partial Fulfillment of the Requirements for the Degree of

Bachelor of Science in Mechanical Engineering

at the

Massachusetts Institute of Technology

June 2019

© 2019 Massachusetts Institute of Technology. All rights reserved.

Signature of Author: Signature redacted
Department of Mechanical Engineering
March 18, 2019

Certified by: Signature redacted
Maria Yang, PhD
Professor of Mechanical Engineering
Thesis Supervisor

Accepted by: Signature redacted
Maria Yang, PhD
Professor of Mechanical Engineering


Choosing Methods and Metrics for Design Feedback

by

Riley M. Steindl

Submitted to the Department of Mechanical Engineering on March 18, 2019 in Partial Fulfillment of the Requirements for the Degree of Bachelor of Science in Mechanical Engineering

ABSTRACT

Collecting feedback is an invaluable part of the design process because it helps inform useful and effective changes to products. With a multitude of different methods for gathering feedback and metrics for evaluation, it can be difficult to select the method and/or metrics that will provide the most useful feedback for a given stage of design. There is currently no condensed guide for selecting methods and metrics for feedback on the basis of stage of design, effort level, and data type.

This study filled this gap through a literature review of 9 academic papers related to design evaluation methods and metrics. Following review, the information in these papers was analyzed to establish how evaluation methods and metrics relate to different stages of design, different data types, and relative effort levels.

The result is a description of the relationships between stage of design and suitability, evaluation method and data type, evaluation method and effort level, and evaluation metric and data type. In all, this study provides useful information for designers to use when selecting the right evaluation methods and metrics to collect feedback on their designs.

Thesis Supervisor: Maria Yang, PhD
Title: Professor of Mechanical Engineering


ACKNOWLEDGEMENTS

I would like to sincerely thank Antti Surma-Aho, PhD student in the Department of Mechanical Engineering, for his support and guidance through the entirety of my thesis work. This work would not have been possible without him.

I am also grateful to Dr. Maria Yang, Associate Professor of Mechanical Engineering, for her advisement and help in arranging my thesis work.

Finally, I would like to thank all of my friends and family for providing moral support and motivation throughout the process of writing this thesis.


Table of Contents

Abstract
Acknowledgements
Table of Contents
List of Figures
List of Tables
1. Introduction
1.1 The Importance of Feedback in the Design Process
1.2 Methods of Gathering Design Feedback
1.2.1 Surveys
1.2.2 User Interviews
1.2.3 Heuristic Evaluation
1.2.4 Usability Testing
1.3 Design Feedback Metrics
1.4 Remaining Gap
2. Methods
2.1 General Methods
2.1.1 Evaluation Methods
2.1.2 Stages of Design
2.1.3 Data Types
2.1.4 Effort Levels
2.2 Answers Sought
3. Results
3.1 Stage of Design vs Suitability
3.2 Evaluation Metric vs Data Type
3.3 Evaluation Method vs Effort Level
3.4 Evaluation Method vs Data Type
4. Discussion

List of Figures

Figure 1-1: Condensed Evaluation Metrics from 2006 Study

List of Tables

TABLE 2-1: Breakdown of Evaluation Method Categories
TABLE 3-1: Stage of Design vs Suitability
TABLE 3-2: Evaluation Metric vs Data Type
TABLE 3-3: Evaluation Method vs Effort Level
TABLE 3-4: Evaluation Method vs Data Type


Chapter 1: Introduction

1.1 The Importance of Feedback in the Design Process

After defining a problem or need, one of the first steps of most design processes is to ideate [1-5]. During this step, designers create a plethora of ideas for possible solutions to the problem or need they are trying to address. Following this step, individuals are usually left with a long list of ideas and need to pare it down to a handful of ideas to pursue further. This is the first place where feedback plays a role in the design process. At this point a team of designers can either rely on their own opinions and insights to down select their ideas or gather feedback from others. In most cases, gathering feedback is essential to reduce the effects of biases within a team.

Gathering feedback is useful throughout the design process and can help inform many of the decisions a design team must make regarding their product. Many design processes include a step of gathering feedback as part of the iterative process of improving and refining a design [1-3,5]. Perhaps the most effective times to utilize feedback are at the milestones between different iterations of a design project. Whether it be the comparison of sketch models or the tweaking of a more finished prototype, feedback is an effective tool to help designers decide which ideas to pursue or which direction to focus their efforts in refining their design.

1.2 Descriptions of Design Feedback Gathering Methods

1.2.1 Surveys

One method of gathering design feedback is to create and distribute feedback surveys. In general, surveys are useful for gathering data from a large group of people. Compared to other methods of gathering feedback, most of the effort involved with surveys revolves around creating the survey at the start and then analyzing the data at the end. The most important aspect of designing a survey is selecting metrics that will provide the type of data a designer needs to point their design in the right direction. The downside of surveys is that participation is typically voluntary, which can lead to certain biases in the data depending on how the survey is executed [6]. Surveys are most useful when a design is well understood by the respondent. New-to-the-world designs are likely unfamiliar to a respondent, making it difficult to formulate meaningful survey questions that will give useful feedback.

1.2.2 User Interviews

Another means of gathering design feedback is to conduct user interviews. User

interviews can be conducted in various ways. Three typical ways of conducting user interviews are: face-to-face, video, and text [7]. In general, the advantage of user interviews is that they allow designers to target individuals for whom their design is intended. This allows designers to get more specific feedback that can be useful depending on the situation. The major drawback of user interviews is that they are typically more time intensive due to the preparation, scheduling, and interview time involved.


1.2.3 Heuristic Evaluation

A third method for gathering design feedback is heuristic evaluation. Heuristic evaluation involves establishing a set of metrics used to evaluate different aspects of a design and then having a group of experts in the field use these metrics to provide feedback on your design [8]. Heuristic evaluation is generally most useful at determining how well a design meets the needs it was designed to meet. A key advantage is that it requires interaction and data collection from fewer individuals than other feedback methods to obtain usable feedback. However, a primary drawback of heuristic evaluation is that it limits the scope of your feedback to a smaller sample of people and requires coordinating with experts, which may be time intensive and require more compensation depending on the case.

1.2.4 Usability Testing

Finally, a fourth general method for gathering design feedback is usability testing. Usability testing involves having users interact with the design to evaluate how easily they can accomplish its intended use [9]. Compared to other methods for gathering feedback, usability testing requires less time because once a user interacts with the design it is very easy to extract valuable feedback. However, a limitation of usability testing is that the designer must have a usable model of the design for the user to interact with. In general, usability testing is a very valuable and easy-to-execute method for gathering design feedback.

1.3 Design Feedback Metrics

As mentioned above, one of the most important parts of gathering design feedback is picking the right metrics to evaluate your design. A 2006 study [10] compiled the metrics used in 90 studies and condensed them into a list of defined metrics for use in design evaluations. Figure 1-1 below shows the results of the study.


Dimension: Definition

1 Novelty: The degree to which an idea is original and modifies a paradigm.
1.1 Originality: The degree to which the idea is not only rare but is also ingenious, imaginative, or surprising.
1.2 Paradigm relatedness: The degree to which an idea is paradigm preserving (PP) or paradigm modifying (PM). PM ideas are sometimes radical or transformational.
2 Workability (Feasibility): An idea is workable (feasible) if it can be easily implemented and does not violate known constraints.
2.1 Acceptability: The degree to which the idea is socially, legally, or politically acceptable.
2.2 Implementability: The degree to which the idea can be easily implemented.
3 Relevance: The idea applies to the stated problem and will be effective at solving the problem.
3.1 Applicability: The degree to which the idea clearly applies to the stated problem.
3.2 Effectiveness: The degree to which the idea will solve the problem.
4 Specificity: An idea is specific if it is clear (worked out in detail).
4.1 Implicational explicitness: The degree to which there is a clear relationship between the recommended action and the expected outcome.
4.2 Completeness: The number of independent subcomponents into which the idea can be decomposed, and the breadth of coverage with regard to who, what, where, when, why, and how.
4.3 Clarity: The degree to which the idea is clearly communicated with regard to grammar and word usage.

Figure 1-1: Condensed Evaluation Metrics from 2006 Study [10]
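To illustrate how metrics like these might feed into a feedback survey, the sketch below encodes a few of the Figure 1-1 dimensions as 5-point rating items and averages a set of hypothetical responses. The metric names come from the figure; the rating scale, the sample data, and the summarize helper are illustrative assumptions, not part of the original study.

```python
# Minimal sketch: summarizing survey ratings against a few of the
# Dean and Hender (2006) metrics shown in Figure 1-1. The 1-5 rating
# scale, sample responses, and helper are illustrative assumptions.
from statistics import mean

METRICS = ["Originality", "Workability/Feasibility", "Relevance", "Clarity"]

# Each respondent rates the design concept on a 1-5 scale per metric.
responses = [
    {"Originality": 4, "Workability/Feasibility": 3, "Relevance": 5, "Clarity": 4},
    {"Originality": 5, "Workability/Feasibility": 2, "Relevance": 4, "Clarity": 3},
    {"Originality": 3, "Workability/Feasibility": 4, "Relevance": 4, "Clarity": 5},
]

def summarize(responses):
    """Return the mean rating for each metric across all respondents."""
    return {m: round(mean(r[m] for r in responses), 2) for m in METRICS}

print(summarize(responses))
# {'Originality': 4.0, 'Workability/Feasibility': 3.0, 'Relevance': 4.33, 'Clarity': 4.0}
```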

1.4 Remaining Gap

Based on my background research, I found four gaps in the current literature related to design evaluation that, if filled, would provide useful guidance for designers. Currently there are studies related to design evaluation during specific stages of the design process [11-13], methods of gathering feedback [7], different metrics for design feedback [10], and other more focused topics related to the design process and feedback. Areas where I determined there was room for improvement were: matching feedback methods to suitable stages of the design process, a summary of what types of data are produced by different feedback metrics, the types of data more easily produced by different feedback methods, and a comparison of the different levels of effort required for different feedback methods. Having this information would be useful because it would help designers understand more of the differences between feedback gathering methods and metrics related to design evaluation. To fill these gaps, I conducted a literature review of academic papers related to design feedback. Using Google Scholar and MIT Library resources I found 11 academic papers [7,9-18] and analyzed them as discussed in the methods section below. To simplify my search, I directed my literature review towards answering the four following questions:

(1) What methods of design evaluation are most suitable for different stages of design?

(2) Which evaluation metrics produce different types of data?

(3) How much effort does each form of evaluation require?

(4) What kinds of useful data do different evaluation methods typically produce?


Chapter 2: Methods

2.1 General Methods

I carried out a review of 10 academic papers and journal articles related to design evaluation. Specifically, I searched for articles related to early, mid, and late-stage prototype evaluation. I utilized Google Scholar and the MIT Libraries search engine as tools to find articles that met my needs. Key search terms included "evaluation of early design prototypes", "design evaluation methods", "methods for evaluating design ideas", and other similar descriptions of the material for which I was searching.

Once I had found 3-5 articles and papers which met my needs for this research, I looked through them and checked the references for each paper for other relevant articles that did not appear in my other searches. After reviewing all 10 of the papers I found, I parsed them for information related to different types of product evaluation methods, evaluation metrics, stages of design, data types, and levels of effort.

2.1.1 Evaluation Methods

For evaluation methods, I found a total of 8 different evaluation methods that are most commonly used to garner feedback during the design process. The 8 most common methods I found were Empirical User Testing, Expert Evaluation, Population Surveys, Sample Surveys, Face-to-Face Interviews, Text-based Interviews, Video/Phone-based interviews, and Thinking Aloud Testing.

Empirical user tests are when designers ask users to complete a set of specific tasks with the product in order to identify problems or pain points [19]. Expert evaluation is when designers have experts help identify problems and evaluate the design based on metrics determined by the designer [19]. For Population and Sample Surveys, designers create a questionnaire about their product and distribute it to a sample or population of potential users [19]. User interviews, including text, phone/video, and face-to-face, are when designers talk to users directly to gather feedback in a more conversational format [7]. Finally, Thinking Aloud Testing is a form of usability testing in which a designer observes a user interacting with the product and has them vocalize what they are thinking and doing as they do so, to help identify pain points [19].

To eliminate redundancies, I narrowed these 8 methods into 4 categories: Surveys, User Interviews, Heuristic Evaluation, and Usability Testing. To sort the methods into more general categories I grouped methods based on commonalities, such as who the subject is and whether or not the product is involved directly. The breakdown of how each method was sorted is shown in Table 2-1 below.

TABLE 2-1: Breakdown of Evaluation Method Categories

Surveys: Population Surveys, Sample Surveys
User Interviews: Face-to-Face, Video/Phone, Text-based
Heuristic Evaluation: Expert Evaluation
Usability Testing: Empirical User Tests, Thinking Aloud Tests
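The grouping in Table 2-1 can also be written as a simple lookup from each specific method to its general category. The sketch below mirrors the table (with text-based interviews placed under User Interviews, as listed in Section 2.1.1); the dictionary and helper are only an illustrative encoding.

```python
# Illustrative encoding of Table 2-1: specific feedback methods grouped
# into the four general categories used in this analysis.
METHOD_CATEGORIES = {
    "Population Surveys": "Surveys",
    "Sample Surveys": "Surveys",
    "Face-to-Face Interviews": "User Interviews",
    "Video/Phone Interviews": "User Interviews",
    "Text-based Interviews": "User Interviews",
    "Expert Evaluation": "Heuristic Evaluation",
    "Empirical User Tests": "Usability Testing",
    "Thinking Aloud Tests": "Usability Testing",
}

def category_of(method):
    """Look up the general category for a specific method name."""
    return METHOD_CATEGORIES.get(method, "Unknown")

print(category_of("Thinking Aloud Tests"))  # Usability Testing
```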


For this analysis I defined Surveys to be any voluntary poll or questionnaire given to a sample or population of potential users or the general public. Likewise, I defined User Interviews to be any evaluation method where intended users were questioned individually or in groups in a more open-ended manner. I defined Heuristic Evaluation as any form of evaluation based around obtaining expert opinions on a design or idea. Finally, I defined Usability Testing as any form of testing where data was derived from direct interactions between the design and users.

2.1.2 Stages of Design

For stages of design, I kept a list of the common forms of design representation I found. From there I categorized each as belonging to an early, middle, or late stage. For these purposes, early stage includes anything produced during the ideation stage of product design. As shown in Table 3-1, this is anything that can be done relatively quickly to help create many new ideas or explore possibilities around one idea, such as blue foam modeling. Middle stage encompasses anything more developed than early stage but not fully fleshed out or complete like late stage models. This includes sketch models, storyboards, and other types of representation used to explore the functionality and design of an idea beyond simple ideation models. Finally, I defined late stage as any embodiment of the product that comes close to capturing the intended form, functionality, or both. In general, the stages serve as a gauge of where an idea is in the design process.

Once I categorized all of the stages I rated each evaluation method based on its suitability for each stage of design. I rated them on a scale of not very suitable to somewhat suitable to suitable to very suitable. The suitability of a given method was determined by its ability to provide useful data with comparable amounts of effort. Table 3-1 (below) shows a matrix of these ratings.

2.1.3 Data Types

Similarly, for metrics I kept a list of common evaluation metrics discussed in other papers and then categorized them based on their propensity for creating General, Technical, or Emotional data. I settled on these three categories because based on my research it seemed that Emotional and Technical data are the most distinct and often most useful forms of data, and General was a way to capture metrics that did not fit into the other two categories. I chose to keep these three categories more general because if I were more specific I would be getting too close to the level of specificity of the metrics I was categorizing.

I used Emotional to denote any metrics that are likely to elicit subjective data related to a user's feelings and opinions about the product. Secondly, I used Technical to categorize any metric that is likely to produce more objective data related to raw facts about the idea rather than impressions or opinions. Finally, as mentioned above, General was used to describe any metric that would not necessarily create data that fit into the other two categories.

2.1.4 Effort Levels

Next, I considered the relative effort levels of each evaluation method. I rated each on a five-point scale from Low to High effort. Each method was rated based on consideration of the amount of time necessary to obtain useful data. The benchmark used for comparison was how much effort it would take for one tester to gather data from one individual. This benchmark was selected because it does not give an unfair advantage to any test in particular, and the results scale well with increased sample size.

2.2 Answers Sought

To answer the four questions stated above, I set out to establish four relationships using the information that I compiled in my literature review. These relationships were:

(1) Stage of Design vs Suitability

(2) Evaluation Method vs Data Type

(3) Evaluation Method vs Effort Level

(4) Evaluation Metric vs Data Type


Chapter 3: Results

3.1 Stage of Design vs Suitability

As discussed above, I first considered the suitability of different feedback gathering methods for different stages of design. For example, heuristic evaluation is not very suitable for an early-stage prototype because during this stage ideas are still poorly defined and developed. A few expert opinions are not the best way to compare or evaluate ideas at this stage because the low fidelity of the designs makes it hard to take full advantage of the knowledge possessed by experts. Instead, a good method to get useful information is user interviews: with similar or less effort than gathering expert opinions, one can conduct an in-person interview and gather useful information that can help inform decisions. Table 3-1 (below) summarizes the results of this analysis.

TABLE 3-1: Stage of Design vs Suitability

Stage: Early | Middle | Late
Examples: Blue Foam Models, Sketches | Sketch Models, Storyboards, Proof of Concept | Looks-Like, Works-Like, Alpha
Surveys: Somewhat suitable | Suitable | Somewhat suitable
User Interviews: Suitable | Suitable | Suitable
Heuristic Evaluation: Not very suitable | Somewhat suitable | Very suitable
Usability Testing: Not very suitable | Somewhat suitable | Very suitable

As shown above, surveys are generally most useful for gathering feedback on early stage models. This supposition is based on the fact that the study by Dean and Hender [10], as well as the one by Kudrowitz and Wallace [11], set out to determine the best metrics for evaluating early models, which supports the idea that surveys, which utilize such metrics, are most suitable for evaluating such ideas. Likewise, based on the work of Ma, Yu, Forlizzi, and Dow, user interviews and surveys are arguably most useful for evaluating middle stage models [7] when compared to heuristic evaluation and usability testing.

The reason that heuristic evaluation and usability testing are rated as not very suitable for early stage and only somewhat suitable for middle stage is that both forms of evaluation require more functional and/or complete models [8,19]. Along that same line of reasoning, because late stage models are by definition more functional and/or developed, heuristic evaluation and usability testing are most suitable for late stage models.
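For a programmatic view of Table 3-1, the sketch below encodes the suitability ratings and ranks the four methods for a given stage. The ratings are taken directly from the table; the numeric ordering of the ratings and the rank_methods helper are assumptions added for illustration.

```python
# Illustrative encoding of Table 3-1: suitability of each feedback method
# at each stage of design. The ranking helper is an added assumption.
SUITABILITY = {
    "Surveys":              {"early": "somewhat suitable", "middle": "suitable",          "late": "somewhat suitable"},
    "User Interviews":      {"early": "suitable",          "middle": "suitable",          "late": "suitable"},
    "Heuristic Evaluation": {"early": "not very suitable", "middle": "somewhat suitable", "late": "very suitable"},
    "Usability Testing":    {"early": "not very suitable", "middle": "somewhat suitable", "late": "very suitable"},
}

# Ratings ordered from least to most suitable.
RATING_ORDER = ["not very suitable", "somewhat suitable", "suitable", "very suitable"]

def rank_methods(stage):
    """Return methods ordered from most to least suitable for a design stage."""
    return sorted(SUITABILITY,
                  key=lambda m: RATING_ORDER.index(SUITABILITY[m][stage]),
                  reverse=True)

print(rank_methods("early"))
# ['User Interviews', 'Surveys', 'Heuristic Evaluation', 'Usability Testing']
```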


3.2 Evaluation Metric vs Data Type

In Table 3-2 (below), I have shown the categorization of 13 commonly used metrics. I selected these metrics from the results of the study by Dean and Hender [10]. Their review analyzed 90 prior studies to synthesize the metrics used in each and condense them down into the most commonly used. Based on my own research, I found that the 13 metrics given by that study accurately encompassed nearly all metrics used in other design evaluation studies. Metrics that fall into two categories do so because the type of data they produce can vary depending on how they are used. For example, acceptability could create emotional data if used in the context of a single user or technical data if used in the context of a usability test or heuristic evaluation. The distinctions between data types were made based on the definitions of each term established in the study by Dean and Hender in 2006 [10] and how those definitions related to the three categories.

TABLE 3-2: Evaluation Metric vs Data Type

Metric\Data Type: Emotional, Technical, General

Novelty: x
Originality: x
Paradigm Relatedness: x
Workability/Feasibility: x
Acceptability: x x
Implementability: x
Relevance: x x
Applicability: x
Effectiveness: x x
Specificity: x
Implicational Explicitness:
Completeness: x
Clarity: x x


3.3 Evaluation Method vs Effort Level

Table 3-3 (below) shows the results for the comparison of effort levels for the designers between different evaluation methods. Each method has a range of effort levels that it falls under. This range helps encompass how demanding each method can be depending on how intensive a designer chooses to be. For example, usability testing can be done in a very quick and simple manner to get data, hence the low effort level on the bottom end. However, should they choose to do so, a designer can pursue more intensive and controlled user testing that requires more effort, resulting in a medium effort level rating on the upper end.

TABLE 3-3: Evaluation Method vs Effort Level

Method: Effort Level (Low, Medium, High)
Usability Testing: Low to Medium
Surveys: Low to Medium
Heuristic Evaluation: Medium
User Interviews: Medium to High

As shown above, usability testing and surveys fall in the low to medium effort levels. This is based on the fact that both can easily be carried out in "quick and dirty" ways on the low end and in controlled and calculated ways that require more effort on the high end [6,19]. Moving up the effort scale, heuristic evaluation falls around the middle because in general it requires a medium amount of effort between formulating the necessary evaluation metrics and coordinating with relevant experts [8]. Finally, towards the top of the effort scale are user interviews. User interviews range from medium to high effort levels because even on the lower end, formulating the interview questions properly, contacting users, and taking the time to perform the interviews can be quite time and effort intensive [20,21]. On the upper end, user interviews can be very exhaustive and require high effort levels when many interviews must be conducted and many different aspects of the product need to be addressed.
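The effort ranges described above can likewise be captured in a small data structure. In the sketch below, the ranges follow Table 3-3 as discussed in the text, while the 1-to-3 numeric scale and the budget filter are assumptions added for illustration.

```python
# Illustrative encoding of the effort ranges behind Table 3-3.
# Effort is expressed on an assumed numeric scale: 1 = low, 2 = medium, 3 = high.
EFFORT_RANGE = {                      # (minimum, maximum) effort
    "Usability Testing":    (1, 2),
    "Surveys":              (1, 2),
    "Heuristic Evaluation": (2, 2),
    "User Interviews":      (2, 3),
}

def methods_within_budget(max_effort):
    """List methods whose minimum effort fits within the available budget."""
    return [m for m, (lo, _hi) in EFFORT_RANGE.items() if lo <= max_effort]

print(methods_within_budget(1))  # ['Usability Testing', 'Surveys']
```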

3.4 Evaluation Method vs Data Type

Table 3-4 (below) shows the results of my comparison of what types of data each evaluation method is most likely to produce. For example, user interviews are most likely to produce emotional data because of the direct, conversational interaction with users involved. On the other hand, heuristic evaluation and usability testing are more likely to produce technical data because of the more functionality-focused evaluation involved. Finally, surveys are categorized under general because the types of data they are likely to produce can vary greatly depending on the questions asked and the population questioned.


TABLE 3-4: Evaluation Method vs Data Type

Emotional: User Interviews
Technical: Heuristic Evaluation, Usability Testing
General: Surveys

As summarized above in Table 3-4, user interviews are most likely to produce emotional data. This is a result of the face-to-face and conversational style common among user interviews. This format lends itself to capturing emotional data because it allows the interviewer to more easily pick up on facial and vocal cues [7]. Both heuristic evaluation and usability testing have a propensity for creating technical data: heuristic evaluation typically utilizes experts to evaluate different technical characteristics, and usability testing draws on direct interaction with the product to test its functionality and identify underlying technical problems [8,18,19]. Lastly, surveys fall into the general category because in practice they are a very flexible tool and the types of data they can collect depend heavily on the design of the survey and the questions asked.
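Table 3-4 can similarly be expressed as a mapping from each method to the data type it most readily produces, with an inverse lookup for designers who start from the kind of data they want. The mapping follows the table; the helper itself is an illustrative assumption.

```python
# Illustrative encoding of Table 3-4: the data type each general evaluation
# method is most likely to produce, plus an assumed inverse lookup.
LIKELY_DATA_TYPE = {
    "User Interviews": "emotional",
    "Heuristic Evaluation": "technical",
    "Usability Testing": "technical",
    "Surveys": "general",
}

def methods_for(data_type):
    """Return the methods most likely to produce the requested data type."""
    return [m for m, t in LIKELY_DATA_TYPE.items() if t == data_type]

print(methods_for("technical"))  # ['Heuristic Evaluation', 'Usability Testing']
```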


Chapter 4: Discussion

In this study, answers were provided for these four questions:

(1) What methods of design evaluation are most suitable for different stages of design?

For the early stages of design, user interviews are the most effective method for gathering useful information. Surveys also have somewhat limited utility depending on the nature of the design. For middle stage designs, surveys and interviews are still suitable, and heuristic evaluation and usability testing additionally gain slight utility depending on whether there are aspects of the design that can be evaluated with these methods. Finally, for late stage designs, heuristic evaluation and usability testing come to the forefront of useful methods. Surveys and user interviews maintain moderate levels of utility for late stage designs as well.

(2) Which evaluation metrics produce different types of data?

Metrics that favor emotional data generally relate more towards subjectivity and the experience of individual users. On the other hand, metrics that favor technical data tend to capture more objective and measurable aspects of a design. Lastly, metrics that have a propensity for general or multiple data types tend to be either objective or subjective depending on the context in which they are used. Please refer to Table 3-2 for a more specific breakdown of the metrics and data types analyzed.

(3) How much effort does each form of evaluation require?

In general, usability tests and surveys can require less effort relative to other methods, the only caveat being that usability testing requires that designers possess a usable version of the product with which to conduct testing. Next on the effort scale is heuristic evaluation, which tends to require a medium level of effort. Heuristic evaluation also requires a more developed design to be effectively utilized. Finally, user interviews were shown to require more effort relative to the other methods due to the extra time and consideration necessary to coordinate and meet with users and to create effective questions for use during each interview. As shown in Table 3-3, the effort required for each of these methods varies depending on its implementation, allowing for some overlap in comparative effort between the methods.

(4) What kinds of useful data do different evaluation methods typically produce?

It was determined that in general user interviews lend themselves to collecting emotional data as a result of their more personal and conversational format. On the other end, heuristic evaluation and usability testing were found to have a greater propensity for producing technical data as a result of how each method is carried out. Finally, surveys fell into the general data category because the types of data collected by surveys are highly dependent on how the survey is designed.
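Taken together, these answers can be read as a single lookup. The sketch below combines Tables 3-1, 3-3, and 3-4 into a hypothetical helper that suggests methods for a given stage, effort budget, and desired data type; the table contents come from this study, while the numeric scoring, the effort scale, and the suggest function are assumptions added purely for illustration.

```python
# Hypothetical combination of Tables 3-1, 3-3, and 3-4 into one helper.
# Table contents follow this study; the scoring and filtering logic is an
# illustrative assumption, not a prescribed procedure.
SUITABILITY = {  # Table 3-1, scored 0 (not very suitable) to 3 (very suitable)
    "Surveys":              {"early": 1, "middle": 2, "late": 1},
    "User Interviews":      {"early": 2, "middle": 2, "late": 2},
    "Heuristic Evaluation": {"early": 0, "middle": 1, "late": 3},
    "Usability Testing":    {"early": 0, "middle": 1, "late": 3},
}
MIN_EFFORT = {"Surveys": 1, "User Interviews": 2,              # Table 3-3, 1 = low, 3 = high
              "Heuristic Evaluation": 2, "Usability Testing": 1}
DATA_TYPE = {"Surveys": "general", "User Interviews": "emotional",    # Table 3-4
             "Heuristic Evaluation": "technical", "Usability Testing": "technical"}

def suggest(stage, max_effort, wanted_data):
    """Suggest methods for a stage, ranked by suitability, that fit the effort
    budget and favor the desired data type ('general' methods always qualify)."""
    candidates = [m for m in SUITABILITY
                  if MIN_EFFORT[m] <= max_effort
                  and DATA_TYPE[m] in (wanted_data, "general")]
    return sorted(candidates, key=lambda m: SUITABILITY[m][stage], reverse=True)

print(suggest("late", max_effort=2, wanted_data="technical"))
# ['Heuristic Evaluation', 'Usability Testing', 'Surveys']
```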


This work can serve as a guide and thinking tool to help designers consider the types of data they are looking to acquire and the methods that are most appropriate for gathering that data. The limitations of this study were the relatively small number of academic papers that were reviewed and synthesized, and the fact that the assessment was conducted by only one person. Any effort to build on the work done here should focus on finding and analyzing more articles. Such articles would serve to add detail and additional data to further support, or to inform changes to, the results of this study. While limited by the total number of papers reviewed, the greatest value of this study is its concentration of information related to design feedback. The information presented here can help inform and guide designers to ask the right questions in the right ways to better improve their designs.


References

[1] Braha, D., and Maimon, O., 1997, "The Design Process: Properties, Paradigms, and Structure," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 27(2), pp. 146-166.

[2] "DiscoverDesign Handbook," DiscoverDesign [Online]. Available: /handbook. [Accessed: 15-Jan-2019].

[3] "ZURB I Design Process, A Design Definition" [Online]. Available:

https://zurb.com/word/design-process. [Accessed: 15-Jan-2019]. [4] "Stages in the Design Process" [Online]. Available:

http://www.sedl.org/scimath/compass/v02nO3/stages.html. [Accessed: 15-Jan-2019].

[5] "1.3: What Is the Engineering Design Process?" [Online]. Available:

what-is-the-engineering-design-process.html. [Accessed: 15-Jan-2019].

[6] Rattray, K., "10 Key Things To Consider When Designing Surveys" [Online]. Available: https://www.surveygizmo.com/resources/blog/designing-surveys/. [Accessed: 15-Jan-2019].

[7] Ma, X., Yu, L., Forlizzi, J. L., and Dow, S. P., 2015, "Exiting the Design Studio: Leveraging Online Participants for Early-Stage Design Feedback," Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing - CSCW '15, ACM Press, Vancouver, BC, Canada, pp. 676-685.

[8] "Heuristic Evaluation: How-To: Article by Jakob Nielsen" [Online]. Available:

https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/. [Accessed: 10-Jan-2019].

[9] De Kock, E., Biljon, J., and Pretorius, M., 2009, "Usability Evaluation Methods: Mind the Gaps," pp. 122-131.

[10] Dean, D., Hender, J., Rodgers, T., and Santanen, E., 2006, "Identifying Quality, Novel, and Creative Ideas: Constructs and Scales for Idea Evaluation," Journal of the Association for Information Systems, 7(10), pp. 646-699.

[11] Kudrowitz, B. M., and Wallace, D., 2013, "Assessing the Quality of Ideas from Prolific, Early-Stage Product Ideation," Journal of Engineering Design, 24(2), pp. 120-139.

[12] Besemer, S. P., and O'Quin, K., 1999, "Confirming the Three-Factor Creative Product Analysis Matrix Model in an American Sample," Creativity Research Journal, 12(4), pp. 287-296.

[13] Liedtka, J., 2015, "Perspective: Linking Design Thinking with Innovation Outcomes through Cognitive Bias Reduction: Design Thinking," Journal of Product Innovation Management, 32(6), pp. 925-938.

[14] Haggman, A., Tsai, G., Elsen, C., Honda, T., and Yang, M. C., 2015, "Connections Between the Design Tool, Design Attributes, and User Preferences in Early Stage Design," J. Mech. Des., 137(7), pp. 071408-071408-13.

[15] Sauro, J., and Kindlund, E., 2005, "A Method to Standardize Usability Metrics into a Single Score," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '05, ACM Press, Portland, Oregon, USA, p. 401.

[16] Kitchenham, B. A., Pickard, L. M., and Linkman, S. J., 1990, "An Evaluation of Some Design Metrics," Software Engineering Journal, 5(1), pp. 50-58.

[17] Shah, J. J., Kulkarni, S. V., and Vargas-Hernandez, N., 2000, "Evaluation of Idea Generation Methods for Conceptual Design: Effectiveness Metrics and Design of Experiments," Journal of Mechanical Design, 122(4), pp. 377-384.

[18] Law, E. L.-C., and Hvannberg, E. T., 2004, "Analysis of Strategies for Improving and Estimating the Effectiveness of Heuristic Evaluation," Proceedings of the Third Nordic Conference on Human-Computer Interaction - NordiCHI '04, ACM Press, Tampere, Finland, pp. 241-250.

[19] Farkas, D. K., "Evaluation and Usability Testing," p. 7.

[20] "User Interviews: How, When, and Why to Conduct Them," Nielsen Norman Group [Online]. Available: https://www.nngroup.com/articles/user-interviews/. [Accessed:

17-Jan-2019].

[21] "An Excellent Step-By-Step Guide To Conducting User Research" [Online]. Available: https://careerfoundry.com/en/blog/ux-design/how-to-conduct-user-experience-research-like-a-professional/. [Accessed: 17-Jan-2019].
