In [[data mining]], '''association rule learning''' is a popular and well-researched method for discovering interesting relations between variables in large databases. Piatetsky-Shapiro<ref name=piatetsky>Piatetsky-Shapiro, G. (1991), Discovery, analysis, and presentation of strong rules, in G. Piatetsky-Shapiro & W. J. Frawley, eds, 'Knowledge Discovery in Databases', AAAI/MIT Press, Cambridge, MA.</ref> describes analyzing and presenting strong rules discovered in databases using different measures of interestingness. Based on the concept of strong rules, Agrawal et al.<ref name=mining>R. Agrawal; T. Imielinski; A. Swami: ''Mining Association Rules Between Sets of Items in Large Databases'', SIGMOD Conference 1993: 207-216</ref> introduced association rules for discovering regularities between products in large-scale transaction data recorded by [[point-of-sale]] (POS) systems in supermarkets. For example, the rule <math>\{\mathrm{onions, potatoes}\} \Rightarrow \{\mathrm{beef}\}</math> found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, he or she is likely to also buy beef. Such information can be used as the basis for decisions about marketing activities such as promotional [[pricing]] or [[product placements]]. In addition to the above example from [[market basket analysis]], association rules are employed today in many application areas including [[Web usage mining]], [[intrusion detection]] and [[bioinformatics]].

== Definition ==

Following the original definition by Agrawal et al.,<ref name=mining/> the problem of association rule mining is defined as follows: Let <math>I=\{i_1, i_2,\ldots,i_n\}</math> be a set of <math>n</math> binary attributes called ''items''. Let <math>D = \{t_1, t_2, \ldots, t_m\}</math> be a set of transactions called the ''database''. Each transaction in <math>D</math> has a unique transaction ID and contains a subset of the items in <math>I</math>. A ''rule'' is defined as an implication of the form <math>X \Rightarrow Y</math> where <math>X, Y \subseteq I</math> and <math>X \cap Y = \emptyset</math>. The sets of items (for short, ''itemsets'') <math>X</math> and <math>Y</math> are called the ''antecedent'' (left-hand side or LHS) and ''consequent'' (right-hand side or RHS) of the rule.

{|class="wikitable" style="float: right;"
|+ Example database with 4 items and 5 transactions
|-
! transaction ID !! milk !! bread !! butter !! beer
|-
| 1 || 1 || 1 || 0 || 0
|-
| 2 || 0 || 1 || 1 || 0
|-
| 3 || 0 || 0 || 0 || 1
|-
| 4 || 1 || 1 || 1 || 0
|-
| 5 || 0 || 1 || 0 || 0
|}

To illustrate the concepts, we use a small example from the supermarket domain. The set of items is <math>I= \{\mathrm{milk, bread, butter, beer}\}</math> and a small database containing the items (1 codes presence and 0 codes absence of an item in a transaction) is shown in the table to the right. An example rule for the supermarket could be <math>\{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\}</math>, meaning that if milk and bread are bought, customers also buy butter. Note that this example is extremely small: in practical applications, a rule needs a support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions.
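To make the definitions concrete, here is a minimal Python sketch (illustrative only, not from the cited papers) that encodes the example database above as a list of itemsets and finds the transactions covered by the antecedent of the example rule:

<pre>
# Illustrative sketch: the example database encoded as a list of
# transactions, each represented as a frozenset of items.
transactions = [
    frozenset({"milk", "bread"}),            # transaction 1
    frozenset({"bread", "butter"}),          # transaction 2
    frozenset({"beer"}),                     # transaction 3
    frozenset({"milk", "bread", "butter"}),  # transaction 4
    frozenset({"bread"}),                    # transaction 5
]

# A rule X => Y is a pair of disjoint itemsets (antecedent, consequent).
X, Y = frozenset({"milk", "bread"}), frozenset({"butter"})
assert not (X & Y)  # X and Y must be disjoint

# IDs of transactions that contain the antecedent X: prints [1, 4]
print([i + 1 for i, t in enumerate(transactions) if X <= t])
</pre>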
To select interesting rules from the set of all possible rules, constraints on various measures of significance and interest can be used. The best-known constraints are minimum thresholds on support and confidence.

The ''support'' <math>\mathrm{supp}(X)</math> of an itemset <math>X</math> is defined as the proportion of transactions in the data set which contain the itemset. In the example database, the itemset <math>\{\mathrm{milk, bread}\}</math> has a support of <math>2/5=0.4</math> since it occurs in 40% of all transactions (2 out of 5 transactions).

The ''confidence'' of a rule is defined as <math>\mathrm{conf}(X\Rightarrow Y) = \mathrm{supp}(X \cup Y) / \mathrm{supp}(X)</math>. For example, the rule <math>\{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\}</math> has a confidence of <math>0.2/0.4=0.5</math> in the database, which means that the rule is correct for 50% of the transactions containing milk and bread. Confidence can be interpreted as an estimate of the probability <math>P(Y|X)</math>, the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS.<ref name=hipp>Jochen Hipp, Ulrich Güntzer, and Gholamreza Nakhaeizadeh. Algorithms for association rule mining - A general survey and comparison. SIGKDD Explorations, 2(2):1-58, 2000.</ref>

The ''lift'' of a rule is defined as <math>\mathrm{lift}(X\Rightarrow Y) = \frac{ \mathrm{supp}(X \cup Y)}{ \mathrm{supp}(X) \times \mathrm{supp}(Y) }</math>, i.e., the ratio of the observed confidence to that expected by chance. The rule <math>\{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\}</math> has a lift of <math>\frac{0.2}{0.4 \times 0.4} = 1.25</math>.

The ''conviction'' of a rule is defined as <math>\mathrm{conv}(X\Rightarrow Y) =\frac{ 1 - \mathrm{supp}(Y) }{ 1 - \mathrm{conf}(X\Rightarrow Y)}</math>. The rule <math>\{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\}</math> has a conviction of <math>\frac{1 - 0.4}{1 - 0.5} = 1.2</math>, which can be interpreted as the ratio of the expected frequency that <math>X</math> occurs without <math>Y</math> (if <math>X</math> and <math>Y</math> were independent) to the observed frequency of such incorrect predictions.
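All four measures follow directly from their definitions. The following Python sketch (illustrative, with the example database hard-coded) reproduces the worked numbers above for the rule <math>\{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\}</math>:

<pre>
# Illustrative sketch: the interest measures defined above, evaluated
# on the example database.
transactions = [{"milk", "bread"}, {"bread", "butter"}, {"beer"},
                {"milk", "bread", "butter"}, {"bread"}]

def supp(itemset):
    # Proportion of transactions that contain every item of the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def conf(X, Y):
    return supp(X | Y) / supp(X)

def lift(X, Y):
    return supp(X | Y) / (supp(X) * supp(Y))

def conv(X, Y):
    return (1 - supp(Y)) / (1 - conf(X, Y))

X, Y = {"milk", "bread"}, {"butter"}
print(supp(X), conf(X, Y), lift(X, Y), conv(X, Y))
# -> 0.4 0.5 1.25 1.2 (up to floating-point rounding)
</pre>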
Association rules are required to satisfy a user-specified minimum support and a user-specified minimum confidence at the same time. To achieve this, association rule generation is a two-step process. First, minimum support is applied to find all ''frequent itemsets'' in a database. In a second step, these frequent itemsets and the minimum confidence constraint are used to form rules. While the second step is straightforward, the first step needs more attention.

<!-- a picture of the itemset lattice would be nice -->
Finding all frequent itemsets in a database is difficult since it involves searching all possible itemsets (item combinations). The set of possible itemsets is the [[power set]] over <math>I</math> and has size <math>2^n-1</math> (excluding the empty set, which is not a valid itemset). Although the size of the power set grows exponentially in the number of items <math>n</math> in <math>I</math>, efficient search is possible using the ''downward-closure property'' of support<ref name=mining/> (also called ''anti-monotonicity''<ref name=pei>Jian Pei, Jiawei Han, and Laks V.S. Lakshmanan. Mining frequent itemsets with convertible constraints. In Proceedings of the 17th International Conference on Data Engineering, April 2-6, 2001, Heidelberg, Germany, pages 433-442, 2001.</ref>), which guarantees that all subsets of a frequent itemset are also frequent and, conversely, that all supersets of an infrequent itemset must be infrequent. Exploiting this property, efficient algorithms (e.g., Apriori<ref name=apriori>Rakesh Agrawal and Ramakrishnan Srikant. Fast algorithms for mining association rules in large databases. In Jorge B. Bocca, Matthias Jarke, and Carlo Zaniolo, editors, Proceedings of the 20th International Conference on Very Large Data Bases, VLDB, pages 487-499, Santiago, Chile, September 1994.</ref> and Eclat<ref name=eclat>Mohammed J. Zaki. Scalable algorithms for association mining. IEEE Transactions on Knowledge and Data Engineering, 12(3):372-390, May/June 2000.</ref>) can find all frequent itemsets.

== Alternative Measures of Interestingness ==
<!-- would be nice to explain each measure -->
In addition to confidence, other measures of interestingness for rules have been proposed. Some popular measures are:
* All-confidence<ref name=allconfidence>Edward R. Omiecinski. Alternative interest measures for mining associations in databases. IEEE Transactions on Knowledge and Data Engineering, 15(1):57-69, Jan/Feb 2003.</ref>
* Collective strength<ref name=collectivestrength>C. C. Aggarwal and P. S. Yu. A new framework for itemset generation. In PODS 98, Symposium on Principles of Database Systems, pages 18-24, Seattle, WA, USA, 1998.</ref>
* Conviction<ref name=conviction>Sergey Brin, Rajeev Motwani, Jeffrey D. Ullman, and Shalom Tsur. Dynamic itemset counting and implication rules for market basket data. In SIGMOD 1997, Proceedings ACM SIGMOD International Conference on Management of Data, pages 255-264, Tucson, Arizona, USA, May 1997.</ref>
* Leverage<ref name=leverage>Piatetsky-Shapiro, G., Discovery, analysis, and presentation of strong rules. Knowledge Discovery in Databases, 1991: p. 229-248.</ref>
* Lift (originally called interest)<ref name=lift>S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. In Proc. of the ACM SIGMOD Int'l Conf. on Management of Data (ACM SIGMOD '97), pages 265-276, 1997.</ref>

A definition of these measures can be found [http://michael.hahsler.net/research/association_rules/measures.html here]. Several more measures are presented and compared by Tan et al.<ref name=measurescomp>Pang-Ning Tan, Vipin Kumar, and Jaideep Srivastava. Selecting the right objective measure for association analysis. Information Systems, 29(4):293-313, 2004.</ref>

== Algorithms ==
Many algorithms for generating association rules have been presented over time. Some well-known algorithms are Apriori, Eclat and FP-growth.

=== Apriori algorithm ===
{{seemain|Apriori algorithm}}
Apriori<ref name=apriori/> is the best-known algorithm to mine association rules. It uses a breadth-first search strategy to count the support of itemsets and a candidate generation function which exploits the downward-closure property of support.
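The level-wise search can be sketched compactly. The following Python sketch (an illustrative simplification; the published algorithm adds efficient counting structures omitted here) finds all frequent itemsets in the example database by joining frequent itemsets of size ''k'' into candidates of size ''k''+1 and pruning any candidate with an infrequent subset:

<pre>
from itertools import combinations

# Simplified level-wise (breadth-first) sketch of the Apriori idea.
def apriori(transactions, min_support):
    n = len(transactions)
    frequent = {}                      # itemset -> support
    level = [frozenset({i}) for t in transactions for i in t]
    level = sorted(set(level), key=sorted)
    k = 1
    while level:
        # Count the support of the size-k candidates in one pass.
        counts = {c: sum(c <= t for t in transactions) for c in level}
        survivors = {c: v / n for c, v in counts.items()
                     if v / n >= min_support}
        frequent.update(survivors)
        # Downward closure: a size-(k+1) candidate is worth counting
        # only if all of its size-k subsets are frequent.
        level, seen = [], set()
        for a, b in combinations(list(survivors), 2):
            cand = a | b
            if (len(cand) == k + 1 and cand not in seen and
                    all(frozenset(s) in survivors
                        for s in combinations(cand, k))):
                seen.add(cand)
                level.append(cand)
        k += 1
    return frequent

db = [{"milk", "bread"}, {"bread", "butter"}, {"beer"},
      {"milk", "bread", "butter"}, {"bread"}]
# Frequent itemsets at min_support 0.4: {milk}, {bread}, {butter},
# {milk, bread} and {bread, butter}.
print(apriori(db, 0.4))
</pre>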
=== Eclat algorithm ===
Eclat<ref name=eclat/> is a depth-first search algorithm using set intersection.

=== FP-growth algorithm ===
FP-growth (frequent pattern growth)<ref name="fp-growth">Jiawei Han, Jian Pei, Yiwen Yin, and Runying Mao. Mining frequent patterns without candidate generation. Data Mining and Knowledge Discovery 8:53-87, 2004.</ref> uses an extended prefix-tree (FP-tree) structure to store the database in a compressed form. FP-growth adopts a divide-and-conquer approach to decompose both the mining tasks and the databases. It uses a pattern fragment growth method to avoid the costly process of candidate generation and testing used by Apriori.

=== One-attribute-rule ===
The '''one-attribute-rule''', or OneR, is an [[algorithm]] for finding association rules. According to Ross, very simple association rules, involving just one attribute in the condition part, often work well in practice with real-world data.<ref>{{Cite web | url=http://www.dcs.napier.ac.uk/~peter/vldb/dm/node8.html | title=OneR: the simplest method |last=Ross|first=Peter}}</ref> The idea of the OneR algorithm is to find the one attribute to use for classifying a novel datapoint that makes the fewest prediction errors. For example, to classify a [[Automobile|car]] you have not seen before, you might apply the following rule: ''If Fast Then Sportscar'', as opposed to a rule with multiple attributes in the condition: ''If Fast And Softtop And Red Then Sportscar''. The algorithm is as follows:

<pre>
For each attribute A:
  For each value V of that attribute, create a rule:
    1. count how often each class appears
    2. find the most frequent class, c
    3. make a rule "if A=V then C=c"
  Calculate the error rate of this rule set
Pick the attribute whose rules produce the lowest error rate
</pre>
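The pseudocode translates directly into a short program. In the following Python sketch, the car data and attribute names are invented for illustration; it picks the single attribute whose one-attribute rules misclassify the fewest training rows:

<pre>
from collections import Counter, defaultdict

# Illustrative OneR sketch; rows are dicts, target names the class column.
def one_r(rows, target):
    """Return (best_attribute, {value: predicted_class}, error_count)."""
    best = None
    for attr in rows[0]:
        if attr == target:
            continue
        # For each value of the attribute, predict its most frequent class.
        by_value = defaultdict(Counter)
        for row in rows:
            by_value[row[attr]][row[target]] += 1
        rule = {v: cls.most_common(1)[0][0] for v, cls in by_value.items()}
        errors = sum(rule[row[attr]] != row[target] for row in rows)
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best

cars = [  # hypothetical training data
    {"fast": "yes", "softtop": "yes", "class": "sportscar"},
    {"fast": "yes", "softtop": "no",  "class": "sportscar"},
    {"fast": "no",  "softtop": "yes", "class": "sedan"},
    {"fast": "no",  "softtop": "no",  "class": "sedan"},
]
print(one_r(cars, "class"))
# -> ('fast', {'yes': 'sportscar', 'no': 'sedan'}, 0)
</pre>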
== Lore ==
A famous story about association rule mining is the "beer and diaper" story. A purported survey of the behavior of supermarket shoppers discovered that customers (presumably young men) who buy diapers tend also to buy beer. This anecdote became popular as an example of how unexpected association rules might be found in everyday data.

== Other types of Association Mining ==
'''Contrast set learning''' is a form of associative learning. '''Contrast set learners''' use rules that differ meaningfully in their distribution across subsets.<ref name=busy>T. Menzies, Y. Hu, "Data Mining For Very Busy People." ''IEEE Computer'', October 2003, pp. 18-25.</ref>

'''Weighted class learning''' is another form of associative learning in which weights may be assigned to classes to give focus to a particular issue of concern for the consumer of the data mining results.

'''[[K-optimal pattern discovery]]''' provides an alternative to the standard approach to association rule learning, which requires that each pattern appear frequently in the data.

'''Mining frequent sequences''' uses support to find sequences in temporal data.<ref name=sequence>M. J. Zaki. (2001). SPADE: An Efficient Algorithm for Mining Frequent Sequences. Machine Learning Journal, 42, 31-60.</ref>

==External links==

===Bibliographies===
* [http://michael.hahsler.net/research/bib/association_rules/ Annotated Bibliography on Association Rules] by M. Hahsler

===Implementations===
* [http://cran.r-project.org/package=arules arules], a package for mining association rules and frequent itemsets with [[R (programming language)|R]].
* [http://www.borgelt.net/fpm.html C. Borgelt's implementation of Apriori and Eclat]
* [http://fimi.cs.helsinki.fi/ Frequent Itemset Mining Implementations Repository (FIMI)]
* [http://adrem.ua.ac.be/~goethals/software/ Frequent pattern mining implementations from Bart Goethals]
* [http://www.cs.waikato.ac.nz/ml/weka/ Weka], a collection of machine learning algorithms for data mining tasks written in [[Java (programming language)|Java]].
* [http://www.cs.rpi.edu/~zaki/software/ Data Mining Software by Mohammed J. Zaki]

==References==
<references />

[[Category:Data management]]
[[Category:Data mining]]

[[de:Assoziationsanalyse]]
[[es:Reglas de asociación]]
[[pt:Regras de associação]]