Intelligibility in Context-Aware Applications

...

Motivation

...

Assessing demand

...

Intelligibility Design Recommendations

Based on the design recommendations in Lim & Dey 2009, we provide a table of recommendations for designers and developers of context-aware applications. The recommendations are derived from participants' survey responses and the resulting analysis. Designers can use this table to determine which types of intelligibility explanations to include in their applications, depending on the circumstances their applications would encounter. For example, if the application is not very accurate, it has low Appropriateness, and we would recommend the explanation types Why, Why Not, How, What If, and Control.
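
As a rough illustration of how the table could be used programmatically, the mapping from a circumstance to its recommended explanation types is just a lookup. The Python sketch below is hypothetical (not part of the original materials) and encodes only the low-Appropriateness example given above; the remaining entries would be filled in from the table that follows.

    # Hypothetical sketch: the recommendation table as a lookup from
    # (circumstance, level) to recommended explanation types.
    RECOMMENDATIONS = {
        ("appropriateness", "low"): ["Why", "Why Not", "How", "What If", "Control"],
        # ... further (circumstance, level) entries would come from the table below
    }

    def recommended_explanations(circumstance, level):
        # Return the explanation types recommended for one circumstance/level pair.
        return RECOMMENDATIONS.get((circumstance, level), [])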

Instructions on usage

Select the checkboxes or radio buttons according to how your candidate context-aware application is characterized (e.g., whether it has high criticality). This highlights the explanation types recommended for your application. You can mouse over the keywords in the table to see their definitions.
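
For example, a designer whose application matches several circumstances at once could take the union of the per-circumstance recommendations, mirroring what the interactive highlighting does. This usage sketch builds on the hypothetical recommended_explanations lookup above; treating multiple selections as a union is our assumption, not a rule stated in the table.

    # Hypothetical usage: an inaccurate (low Appropriateness), safety-critical application.
    selected = [("appropriateness", "low"), ("criticality", "high")]
    to_include = set()
    for circumstance, level in selected:
        to_include.update(recommended_explanations(circumstance, level))
    print(sorted(to_include))  # explanation types to highlight / include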

[Recommendation table: rows list the explanation types, grouped into Application explanations (Input, Output) and Model explanations (Why, Why Not, How, What If, What Else, Certainty, Control, Situation); columns list the circumstances: General, plus Low and High levels of Appropriateness, Criticality, Goal-Supportive, Recommendation, and Externalities. A '+' in a cell marks an explanation type as recommended under that circumstance; see Lim & Dey 2009 for the complete set of markings.]
General: Select this option for recommendations for context-aware applications in general.
Appropriateness: Whether the application tends to be accurate or behaves appropriately. E.g., an accuracy of <80% for recognizing falls may be considered low Appropriateness.
Criticality: Whether the situation presented is critical. Situations involving accidents, medical concerns, or work-related urgency can be considered highly critical.
Goal-Supportive: Whether the situation is motivated by a goal the user has.
Recommendation: Whether the application is recommending information for the user to follow or ignore.
Externalities: Whether the application is perceived to have high external dependencies (e.g., getting weather information from a weather radio station) vs. being perceived as "self-contained."
Application: Explanations about the application, what it does, how it works, etc.
Input: What sensors or input sources the application uses/used and what their values are/were.
Output: What outputs, options, or alternative actions the application can produce. E.g., What accidents can the system sense?
Model: Explanations about the conceptual model of the application.
Why: Why the system behaved the way it did for a specific event/action. E.g., Why did the system report a fall?
Why Not: Why the system did not behave another way for a specific event/action. Normally asked when the user's expectation does not match the system behavior. E.g., Why did the system not report a fire?
How: How the application arrives at a decision or output action. This is more general than the Why question. E.g., How does the system distinguish between a falling object and a falling person?
What If: What would happen if an alternative circumstance or alternative input values were present. E.g., If an object falls, would the system report a fall?
What Else: What else the application has done or is doing other than what it has told the user. E.g., Did the system alert emergency services of the accident?
Certainty: How confident the application is of its decision (recognition, interpretation, etc.), i.e., how accurate it is for an action.
Control: How the user can change parameters for more appropriate application behavior, override its decisions, etc. E.g., How can I change settings to control the sensitivity of reports?
Situation: Explanations that provide users with more situational awareness, i.e., more information about the situation, environment, or people, rather than about the application. E.g., What was the family member doing before the accident?
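
To make these explanation types concrete from a developer's point of view, the sketch below shows one hypothetical way a fall-detection application (following the examples above) might answer a few of the corresponding queries. It is illustrative only; the class, method names, and return values are assumptions, not the API of any particular toolkit.

    # Hypothetical fall detector answering intelligibility queries that
    # correspond to some of the explanation types defined above.
    class FallDetectorExplanations:
        def inputs(self):
            # Input: which sensors/input sources were used and their values.
            return {"accelerometer": "3.2 g peak", "floor_pressure": "impact detected"}

        def why(self, event):
            # Why: why the system behaved the way it did for a specific event.
            return f"Reported {event} because the measured impact exceeded the fall threshold."

        def why_not(self, alternative):
            # Why Not: why the system did not behave another way.
            return f"Did not report {alternative} because no smoke or heat was sensed."

        def certainty(self, event):
            # Certainty: how confident the system is in its decision for this event.
            return 0.87  # hypothetical confidence score

        def control(self):
            # Control: which parameters the user can change (e.g., report sensitivity).
            return ["report_sensitivity", "emergency_contacts"]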

Lab studies

...

Field studies

...

Toolkit

...


References

Lim, B. Y. and Dey, A. K. Assessing Demand for Intelligibility in Context-Aware Applications. In Proceedings of the 11th International Conference on Ubiquitous Computing (Orlando, Florida, USA, September 30 - October 3, 2009). UbiComp '09. ACM, New York, NY, 195-204. DOI: http://doi.acm.org/10.1145/1620545.1620576

Lim, B. Y., Dey, A. K., and Avrahami, D. Why and Why Not Explanations Improve the Intelligibility of Context-Aware Intelligent Systems. In Proceedings of the 27th International Conference on Human Factors in Computing Systems (Boston, MA, USA, April 4-9, 2009). CHI '09. ACM, New York, NY, 2119-2128. DOI: http://doi.acm.org/10.1145/1518701.1519023 (Nominated for Best Paper)