CSAIL Research Abstracts - 2005

Recompiling Utility Functions in a Changing World

Michael McGeachie & Jon Doyle

Decision Support and Preference

While classical decision theory can appear too limited or too rigorous to support decision-making in dynamic domains and under changing conditions, many decision-theoretic formalisms have great conceptual benefit. Researchers in artificial intelligence have developed several formulations of qualitative decision theory [1,5,6], applicable in domains where accurate probabilities are unavailable or time-consuming preference elicitation techniques are undesirable. Such qualitative preference formulations allow decision makers to make natural statements, representing their own intuitions about the decision space and easing the preference elicitation task. However, when situations change, or the preferences themselves change, these reasoning systems must solve their problems over again. To ease this burden, we investigate the conditions under which partial recompilation is both possible and helpful.

Including qualitative preferences in decision support systems allows personalization. Decision support systems that aim to help more than one decision maker need a way to represent the differences between users, and the most direct method is to represent their preferences. By preferences we mean the desires, tastes, and priorities that a person has; these differ from person to person, causing two people faced with the same decision to make different choices.

Ceteris Paribus Preferences in Dynamic Domains

Wellman and Doyle [6] have observed that qualitative representations of preferences are a succinct and reasonable approximation of at least one common type of human preference. Doyle, Shoham, and Wellman [2] present a theoretical formulation of a generalization of such preferences in terms of ceteris paribus preferences, i.e., all-else-equal preferences. Ceteris paribus relations express a preference over sets of possible worlds. We consider all possible worlds (or outcomes) to be describable by some (large) set of binary features F. Each ceteris paribus rule then specifies some features of outcomes, and a preference over them, while ignoring the remaining features. The specified features are instantiated to either true or false, while the ignored features are "fixed," or held constant. A ceteris paribus rule might be "we prefer programming tutors who received an A in Software Engineering to tutors who did not, other things being equal." In this example, we can imagine a universe of computer science tutors, each describable by some set of binary features F; perhaps F = {Graduated, SoftwareEngineering_A, ComputerSystems_A, Cambridge_resident, Willing_to_work_on_Tuesdays, ...}. The preference expressed above states that a particular computer science tutor is more desirable if he or she received an A in the Software Engineering course, all other features being equal.
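The tutor example can be made concrete with a minimal sketch. This is an illustration of the all-else-equal semantics, not the authors' implementation; the function name and outcome encoding are our own.

```python
# Sketch: outcomes are dicts of binary features; a rule maps each
# *specified* feature to its (preferred, dispreferred) values.

def ceteris_paribus_prefers(rule, outcome_a, outcome_b):
    """True if outcome_a is preferred to outcome_b under `rule`.

    The rule applies only when every feature it does NOT specify
    is equal in both outcomes ("all else equal").
    """
    specified = set(rule)
    # Ignored features must be held constant across the two outcomes.
    for f in (set(outcome_a) | set(outcome_b)) - specified:
        if outcome_a.get(f) != outcome_b.get(f):
            return False
    # Each specified feature must take its preferred value in a
    # and its dispreferred value in b.
    return all(outcome_a[f] == pref and outcome_b[f] == disp
               for f, (pref, disp) in rule.items())

# "Prefer tutors with an A in Software Engineering, other things equal."
rule = {"SoftwareEngineering_A": (True, False)}

alice = {"SoftwareEngineering_A": True,  "Graduated": True,  "Cambridge_resident": False}
bob   = {"SoftwareEngineering_A": False, "Graduated": True,  "Cambridge_resident": False}
carol = {"SoftwareEngineering_A": False, "Graduated": False, "Cambridge_resident": False}

print(ceteris_paribus_prefers(rule, alice, bob))    # True: all else is equal
print(ceteris_paribus_prefers(rule, alice, carol))  # False: Graduated differs
```

Note that the rule is silent about the (alice, carol) pair: since Graduated differs, the comparison is not an all-else-equal one, and the rule simply does not apply.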

Our previous work includes a methodology for going from preferences over a domain to making decisions in that domain [4]. Our framework takes a set of ceteris paribus preferences and compiles a utility function consistent with those preferences. This approach has several properties amenable to functioning in a dynamic domain. Firstly, we allow some latitude in the way a user is able to specify preferences. These preferences may well be incomplete, may not refer to all variables in the domain, may be conditional or unconditional, and may exhibit both utility dependencies and utility independencies [3]. Secondly, we can add variables to the domain without altering or invalidating the existing preferences. Removing variables is also easy; however, the existing preferences referencing those variables must be truncated, potentially shifting their meaning.
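The truncation step for removed variables can be sketched as follows. This is a hypothetical rendering, assuming a preference statement is stored as a "better" and a "worse" partial assignment of features; the statement encoding and function name are ours, not from [4].

```python
# Sketch: removing a domain variable truncates any statement mentioning it.

def truncate_statement(statement, removed_feature):
    """Drop `removed_feature` from both sides of a preference statement.

    statement: {"better": {feature: bool}, "worse": {feature: bool}}.
    Returns the truncated statement, or None when truncation leaves the
    two sides identical, i.e. the statement no longer expresses a
    preference -- one way its meaning can shift.
    """
    better = {f: v for f, v in statement["better"].items() if f != removed_feature}
    worse  = {f: v for f, v in statement["worse"].items()  if f != removed_feature}
    if better == worse:
        return None
    return {"better": better, "worse": worse}

s = {"better": {"SoftwareEngineering_A": True,  "Graduated": True},
     "worse":  {"SoftwareEngineering_A": False, "Graduated": True}}

# Removing a feature the preference does not hinge on leaves it intact:
print(truncate_statement(s, "Graduated"))
# Removing the feature it compares makes it vacuous:
print(truncate_statement(s, "SoftwareEngineering_A"))  # None
```

Adding a variable, by contrast, needs no edit at all: statements never mention it, so they remain valid as-is.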

The method is roughly as follows. First, we take a set of ceteris paribus preferences S and consider which utility independence assumptions are consistent with S. This yields a partition C of the feature space. The preferences S are used to construct linear subutility functions ui over the blocks of the partition. Then a system of linear inequalities I is constructed that constrains the scaling parameters ti of the utility function. In this manner, a linear additive utility function is created.
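The final assembly step can be sketched in a few lines. This is a toy illustration only: in the real algorithm the partition, the subutility functions ui, and the scaling parameters ti (a solution of the inequality system I) are all derived from the preference set, whereas here they are supplied by hand.

```python
# Sketch: a linear additive utility u(x) = sum_i t_i * u_i(x | block_i).

def make_utility(partition, subutilities, scaling):
    """Combine per-block subutility functions with scaling parameters."""
    def u(outcome):
        total = 0.0
        for block, u_i, t_i in zip(partition, subutilities, scaling):
            restricted = {f: outcome[f] for f in block}  # x restricted to block_i
            total += t_i * u_i(restricted)
        return total
    return u

# A hand-made partition C of the feature space into utility-independent blocks.
partition = [("SoftwareEngineering_A",), ("Graduated", "Cambridge_resident")]

# Toy subutility functions u_i (in practice, built from the preferences S).
subutilities = [
    lambda x: 1.0 if x["SoftwareEngineering_A"] else 0.0,
    lambda x: (1.0 if x["Graduated"] else 0.0)
              + (0.5 if x["Cambridge_resident"] else 0.0),
]

# Toy scaling parameters t_i (in practice, any solution of I).
scaling = [2.0, 1.0]

u = make_utility(partition, subutilities, scaling)
a = {"SoftwareEngineering_A": True,  "Graduated": True, "Cambridge_resident": False}
b = {"SoftwareEngineering_A": False, "Graduated": True, "Cambridge_resident": True}
print(u(a))  # 2.0*1.0 + 1.0*1.0 = 3.0
print(u(b))  # 2.0*0.0 + 1.0*1.5 = 1.5
```

The additive form is what makes partial recompilation attractive: each block's subutility can, in principle, be rebuilt without touching the others.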

Results

Our representation of preference is suited to two kinds of preference change: utility independence changes, and changes in the underlying feature space.

Our representation of preference, while utilizing utility independence of features as a major computational expedient, makes no semantic or syntactic commitment to utility independence. This allows our representation to reason efficiently with preferences that exhibit substantial utility independence and reason less efficiently with inherently more complicated, utility dependent preferences, as the particular domain and problem require. Furthermore, a user's preferences may change in such a way that some features that were utility independent become dependent, and vice versa. Our system allows this shift without requiring specialized semantics or syntax.

In general, preference changes are a source of instability for our algorithm. If preferences change arbitrarily, then we must repeat the work already done to build a utility function, and build a new one largely unrelated to the previous utility function. However, our criterion, which asks whether the changing preferences participate in cycles on some subutility function, characterizes the cases where a complete recompilation is not necessary. For the following, we say that a preference statement r changes between preference sets C and C' if it appears in exactly one of C and C'.

Theorem 1 (Cycle Agnosticism) For all preference statements r changing between C and C', if r is a member of no cycle on any subutility function ui built from preferences C or any subutility function ui' built from preferences C', then building utility function u' from C' does not require recompilation in the utility function construction algorithm.

This theorem says that we are free to keep the subutility functions and scaling parameter assignments of a utility function u for C when building u' for C', provided C' obeys the stated conditions. In general, this avoids the computationally expensive parts of the algorithm.
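The reuse test implied by Theorem 1 is a simple set computation once cycle membership is known. In this sketch, cycle detection on the subutility functions is assumed to have been done elsewhere and its result passed in; the function names are ours.

```python
# Sketch: decide whether the old subutility functions and scaling
# parameters can be reused when the preference set changes from C to C2
# (C2 standing in for C'). Per Theorem 1, reuse is safe when no changed
# statement lies on a cycle of any subutility function built from C or C2.

def recompilation_needed(C, C2, cyclic_statements):
    """`cyclic_statements`: statements on some cycle of some u_i (assumed
    precomputed). Returns True iff a full recompilation is required."""
    changed = (C - C2) | (C2 - C)  # statements in exactly one of C, C2
    return bool(changed & cyclic_statements)

C      = {"r1", "r2", "r3"}
C2     = {"r1", "r2", "r4"}   # r3 removed, r4 added
cyclic = {"r2"}               # only the unchanged r2 lies on a cycle

print(recompilation_needed(C, C2, cyclic))   # False: reuse u's pieces
print(recompilation_needed(C, C2, {"r4"}))   # True: a changed statement is cyclic
```

The expensive work thus concentrates in the cycle analysis itself; the reuse decision afterwards is cheap.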

Future Work

Performing preference elicitation and preference reasoning in a dynamic environment presents a significant challenge to our work. Ideally, our system should adapt gracefully to changing or additional preferences. Adding preferences to our formulation currently requires recompiling the utility function, which can be computationally costly. Future work will investigate partial compilation methods and alternative representations with better incremental compilation properties.

Research Support

The author is supported in part by National Library of Medicine (NLM) Training grant T15-LM07092 and National Multi-Protocol Ensemble for Self-Scaling Systems for Health (NMESH) grant N01-LM-3-3515.

References:

[1] C. Boutilier, R. Brafman, H. Hoos, and D. Poole. Reasoning with conditional ceteris paribus preference statements. In Proceedings of Uncertainty in Artificial Intelligence, pp. 71--80, 1999.

[2] Jon Doyle, Yoav Shoham, and Michael P. Wellman. A logic of relative desire (preliminary report). In Z. Ras, ed., Proceedings of the 6th International Symposium on Methodologies for Intelligent Systems, Lecture Notes in Computer Science, pp. 16--31. Berlin: Springer-Verlag, 1991.

[3] Michael McGeachie and Jon Doyle. Efficient utility functions for ceteris paribus preferences. In Proceedings of the 18th National Conference on Artificial Intelligence, pp. 279--284, Edmonton, Alberta, 2002.

[4] Michael McGeachie and Jon Doyle. Utility functions for ceteris paribus preferences. Computational Intelligence: Special Issue on Preferences, 20(2), pp. 158--217, 2004.

[5] S.-W. Tan and J. Pearl. Qualitative decision theory. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pp. 928--933, Seattle, WA, 1994.

[6] Michael P. Wellman and Jon Doyle. Preferential semantics for goals. In T. Dean and K. McKeown, eds, Proceedings of the 9th National Conference on Artificial Intelligence, pp. 698--703, 1991.
