International Workshop on the Interplay between User Experience Evaluation and System Development
We understand the relationship between UX and usability as one in which usability is subsumed by UX. Usability evaluation methods (UEMs) and metrics are relatively mature. In contrast, UX evaluation methods (UXEMs), which draw largely on UEMs, are still taking shape. It is conceivable that feeding the outcomes of UX evaluation back into the software development cycle to instigate the required changes can be even more challenging than doing so for usability evaluation (UE). This raises several key issues.
- Given that UX attributes are (much) fuzzier and more malleable, what kinds of diagnostic information and improvement suggestions can be drawn from evaluation data? For instance, a game can be perceived by the same person as great fun on one day and terribly boring the next, depending on the player's prevailing mood. The waning of the novelty effect (cf. learnability, which differs over time in the case of usability) can account for the difference as well. How does evaluation feedback enable designers/developers to fix this experiential problem (cf. usability problem), and how can they know that their fix works (i.e. downstream utility)?
- Emphasis is put on conducting UE in the early phases of a development lifecycle with the use of low fidelity prototypes, thereby enabling feedback to be incorporated before it becomes too late or costly to make changes. However, is this principle applicable to UX evaluation? Is it feasible to capture authentic experiential responses with a low-fidelity prototype? If yes, how can we draw insights from these responses?
- The persuasiveness of empirical feedback determines its worth. Earlier research indicates that the development team needs to be convinced of the urgency and necessity of fixing usability problems. Is UX evaluation feedback less persuasive than usability feedback? If so, will the impact of UX evaluation be weaker than that of UE?
- The Software Engineering (SE) community has recognized the importance of usability. Efforts are focused on explaining the implications of usability for requirements gathering, software architecture design, and the selection of software components. Can such recognition and implications be taken for granted for UX, given that UX evaluation methodologies and measures could be very different (e.g. artistic performance)?
- How to translate observational or inspection data into prioritised usability problems or redesign proposals is thinly documented in the literature. Analysis approaches developed by researchers are applied only to a limited extent by practitioners. This divide between research and practice could be even sharper for UX analysis approaches, which are essentially lacking.
While the gap between HCI and SE with regard to usability has been somewhat narrowed, it may widen again with the emergence of UX.
The main goal of I-UxSED 2012 is to bring together people from HCI and SE to identify challenges and plausible resolutions to optimize the impact of UX evaluation feedback on software development.