Python Implementation of the FP-growth Algorithm (Data Mining Study Notes)

Contents

1. Algorithm pseudocode

2. Algorithm code

3. Test data

4. Results

1. Algorithm Pseudocode

Input:

D: a transaction database.

min_sup: a minimum support threshold.

Output:

The complete set of frequent patterns.

Method:

1. Construct the FP-tree as follows:

(a) Scan the transaction database D once, collecting the set F of frequent items and their support counts. Sort F in descending order of support count; the result is the frequent-item list L.

(b) Create the root of the FP-tree and label it "null". Then, for each transaction trans in D, do the following:

Select the frequent items in trans and sort them according to the order of L. Let the sorted frequent-item list of trans be [p|P], where p is the first element and P is the list of remaining elements. Call insert_tree([p|P], T), which proceeds as follows: if T has a child N such that N.item-name = p.item-name, increment N's count by 1; otherwise, create a new node N with its count set to 1, link it to its parent T, and link it via the node-link structure to the other nodes with the same item-name. If P is nonempty, call insert_tree(P, N) recursively.
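The insert_tree step can be sketched with a minimal dict-based node (a simplified, hypothetical structure for illustration; the full implementation in section 2 uses a Node class and also maintains node-links, which are omitted here):

```python
def insert_tree(items, tree):
    """Insert an ordered list of frequent items into a dict-based FP-tree.

    Each node is {"count": int, "children": {item_name: node}} -- a minimal
    sketch of insert_tree([p|P], T); node-links are omitted.
    """
    if not items:
        return
    p, rest = items[0], items[1:]
    if p in tree["children"]:        # a child N with N.item-name == p exists
        tree["children"][p]["count"] += 1
    else:                            # otherwise create a new node with count 1
        tree["children"][p] = {"count": 1, "children": {}}
    insert_tree(rest, tree["children"][p])  # recurse on the remainder P

root = {"count": 0, "children": {}}  # the "null" root
for trans in [["K", "E", "M"], ["K", "E", "O"]]:  # already in L order
    insert_tree(trans, root)
```

After the two insertions, the shared prefix K, E carries count 2 while the diverging leaves M and O each carry count 1, which is exactly the prefix-sharing that makes the FP-tree compact.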

2. Mining of the FP-tree is performed by calling FP_growth(FP_tree, null), which proceeds as follows:

Procedure FP_growth(Tree, α)

(1) if Tree contains a single path P then

(2) for each combination of the nodes in path P (denoted β)

(3) generate pattern β ∪ α with support_count equal to the minimum support count of the nodes in β;

(4) else for each a_i in the header table of Tree {

(5) generate pattern β = a_i ∪ α with support_count = a_i.support_count;

(6) construct β's conditional pattern base, and then β's conditional FP-tree Tree_β;

(7) if Tree_β ≠ Ø then

(8) call FP_growth(Tree_β, β); }
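Lines (1)-(3), the single-path case, can be sketched on their own: every non-empty combination of the path's nodes yields a pattern, whose support count is the minimum node count in the combination. A minimal sketch, assuming the path is given as (item, count) pairs:

```python
from itertools import combinations

def single_path_patterns(path, alpha=frozenset()):
    """Enumerate patterns from a single-path FP-tree (steps (1)-(3)).

    path: list of (item, count) pairs from root to leaf.
    Returns {pattern: support_count}; each pattern is the union of a node
    combination beta with the suffix alpha, and its support count is the
    minimum count of the nodes in beta.
    """
    patterns = {}
    for r in range(1, len(path) + 1):
        for beta in combinations(path, r):
            items = frozenset(item for item, _ in beta) | alpha
            patterns[items] = min(count for _, count in beta)
    return patterns

# example: a single path K:5 -> E:4 yields {K}:5, {E}:4, {K,E}:4
pats = single_path_patterns([("K", 5), ("E", 4)])
```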

2. Algorithm Code

# -*- coding: utf-8 -*-
from tqdm import tqdm
import time
import psutil
import os

def load_data():  # load the dataset
    ans = []  # the transactions are collected in this list
    reader = [
        ['M', 'O', 'N', 'K', 'E', 'Y'],
        ['D', 'O', 'N', 'K', 'E', 'Y'],
        ['M', 'A', 'K', 'E'],
        ['M', 'U', 'C', 'K', 'Y'],
        ['C', 'O', 'O', 'K', 'I', 'E']
    ]
    for row in reader:
        row = list(set(row))  # deduplicate, then sort
        row.sort()
        ans.append(row)  # append the processed transaction
    return ans  # return the processed dataset as a 2-D list

def show_confidence(rule):
    index = 1
    for item in rule:
        s = " {:<4d} {:.3f} {}-->{}".format(index, item[2], str(list(item[0])), str(list(item[1])))
        index += 1
        print(s)

class Node:
    def __init__(self, node_name, count, parentNode):
        self.name = node_name
        self.count = count
        self.nodeLink = None  # via nodeLink, all nodes in the tree with the same name can be reached
        self.parent = parentNode  # parent node
        self.children = {}  # children: {node name: node object}

class Fp_growth_plus():

    def data_compress(self, data_set):
        data_dic = {}
        for i in data_set:
            if frozenset(i) not in data_dic:
                data_dic[frozenset(i)] = 1
            else:
                data_dic[frozenset(i)] += 1
        return data_dic

    def update_header(self, node, targetNode):  # append targetNode to the end of the node-link chain starting at node
        while node.nodeLink != None:
            node = node.nodeLink
        node.nodeLink = targetNode

    def update_fptree(self, items, count, node, headerTable):  # insert a filtered, sorted transaction into the FP-tree
        if items[0] in node.children:
            # the first item already exists as a child: just add to its count
            node.children[items[0]].count += count
        else:
            # create a new branch
            node.children[items[0]] = Node(items[0], count, node)
            # update this item's node-link chain, appending at the end
            if headerTable[items[0]][1] == None:
                headerTable[items[0]][1] = node.children[items[0]]
            else:
                self.update_header(headerTable[items[0]][1], node.children[items[0]])
        # recurse on the remaining items
        if len(items) > 1:
            self.update_fptree(items[1:], count, node.children[items[0]], headerTable)

    def create_fptree(self, data_dic, min_support, flag=False):  # main tree-building function
        item_count = {}  # count the occurrences of each item
        for t in data_dic:  # first pass: find the frequent 1-itemsets
            for item in t:
                if item not in item_count:
                    item_count[item] = data_dic[t]
                else:
                    item_count[item] += data_dic[t]
        headerTable = {}
        for k in item_count:  # drop items below the minimum support
            if item_count[k] >= min_support:
                headerTable[k] = item_count[k]

        freqItemSet = set(headerTable.keys())  # items that meet the minimum support
        if len(freqItemSet) == 0:
            return None, None
        for k in headerTable:
            headerTable[k] = [headerTable[k], None]  # element: [count, node]
        tree_header = Node('head node', 1, None)
        if flag:
            ite = tqdm(data_dic)
        else:
            ite = data_dic
        for t in ite:  # second pass: build the tree
            localD = {}
            for item in t:
                if item in freqItemSet:  # filter: keep only this transaction's items that meet the minimum support
                    localD[item] = headerTable[item][0]  # element : count
            if len(localD) > 0:
                # sort the transaction's items by global frequency, descending
                order_item = [v[0] for v in sorted(localD.items(), key=lambda x: x[1], reverse=True)]
                # update the tree with the filtered, sorted transaction
                self.update_fptree(order_item, data_dic[t], tree_header, headerTable)
        return tree_header, headerTable

    def find_path(self, node, nodepath):
        '''
        Recursively append the names of node's ancestors to nodepath.
        '''
        if node.parent != None:
            nodepath.append(node.parent.name)
            self.find_path(node.parent, nodepath)

    def find_cond_pattern_base(self, node_name, headerTable):
        '''
        Given an item name, collect all of its conditional pattern bases.
        '''
        treeNode = headerTable[node_name][1]
        cond_pat_base = {}  # stores all conditional pattern bases
        while treeNode != None:
            nodepath = []
            self.find_path(treeNode, nodepath)
            if len(nodepath) > 1:
                cond_pat_base[frozenset(nodepath[:-1])] = treeNode.count
            treeNode = treeNode.nodeLink
        return cond_pat_base

    def create_cond_fptree(self, headerTable, min_support, temp, freq_items, support_data):
        # the initial frequent itemsets are the individual entries of headerTable
        freqs = [v[0] for v in sorted(headerTable.items(), key=lambda p: p[1][0])]  # sort items by total frequency
        for freq in freqs:  # for each frequent item
            freq_set = temp.copy()
            freq_set.add(freq)
            freq_items.add(frozenset(freq_set))
            if frozenset(freq_set) not in support_data:  # record or accumulate this itemset's support count
                support_data[frozenset(freq_set)] = headerTable[freq][0]
            else:
                support_data[frozenset(freq_set)] += headerTable[freq][0]

            cond_pat_base = self.find_cond_pattern_base(freq, headerTable)  # collect all conditional pattern bases
            # build the conditional FP-tree
            cond_tree, cur_headtable = self.create_fptree(cond_pat_base, min_support)
            if cur_headtable != None:
                self.create_cond_fptree(cur_headtable, min_support, freq_set, freq_items, support_data)  # recursively mine the conditional FP-tree

    def generate_L(self, data_set, min_support):
        data_dic = self.data_compress(data_set)
        freqItemSet = set()
        support_data = {}
        tree_header, headerTable = self.create_fptree(data_dic, min_support, flag=True)  # build the FP-tree of the full dataset
        # mine each frequent item's conditional FP-tree, collecting frequent itemsets and their support counts
        self.create_cond_fptree(headerTable, min_support, set(), freqItemSet, support_data)

        max_l = 0
        for i in freqItemSet:  # bucket the frequent itemsets into L by size
            if len(i) > max_l: max_l = len(i)
        L = [set() for _ in range(max_l)]
        for i in freqItemSet:
            L[len(i) - 1].add(i)
        for i in range(len(L)):
            print("frequent item {}:{}".format(i + 1, L[i]))
        return L, support_data

    def generate_R(self, data_set, min_support, min_conf):
        L, support_data = self.generate_L(data_set, min_support)
        rule_list = []
        sub_set_list = []
        for i in range(0, len(L)):
            for freq_set in L[i]:
                for sub_set in sub_set_list:
                    if sub_set.issubset(freq_set) and freq_set - sub_set in support_data:
                        conf = support_data[freq_set] / support_data[freq_set - sub_set]
                        big_rule = (freq_set - sub_set, sub_set, conf)
                        if conf >= min_conf and big_rule not in rule_list:
                            rule_list.append(big_rule)
                sub_set_list.append(freq_set)
        rule_list = sorted(rule_list, key=lambda x: (x[2]), reverse=True)
        return rule_list

if __name__ == "__main__":
    begin_time = time.time()
    min_support = 3  # minimum support (absolute count)
    min_conf = 0.8  # minimum confidence
    data_set = load_data()
    print(data_set)
    fp = Fp_growth_plus()
    rule_list = fp.generate_R(data_set, min_support, min_conf)
    print("confidence:")
    show_confidence(rule_list)
    end_time = time.time()
    timeget = end_time - begin_time
    print("&#x7A0B;&#x5E8F;&#x5F00;&#x59CB;&#x65F6;&#x95F4;:",begin_time)
    print("&#x7A0B;&#x5E8F;&#x7ED3;&#x675F;&#x65F6;&#x95F4;&#xFF1A;",end_time)
    print("&#x7A0B;&#x5E8F;&#x82B1;&#x8D39;&#x65F6;&#x95F4;&#xFF1A;",timeget)
    print(u'&#x5F53;&#x524D;&#x8FDB;&#x7A0B;&#x7684;&#x5185;&#x5B58;&#x4F7F;&#x7528;&#xFF1A;%.4f GB' % (psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024 / 1024))</4d}>

Note: I can no longer find the original source of this code!

3. Test Data

['M', 'O', 'N', 'K', 'E', 'Y'],
['D', 'O', 'N', 'K', 'E', 'Y'],
['M', 'A', 'K', 'E'],
['M', 'U', 'C', 'K', 'Y'],
['C', 'O', 'O', 'K', 'I', 'E']
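For this dataset and min_support = 3, the frequent-item ordering of pseudocode step 1(a) can be reproduced with a short stand-alone count (duplicates within a transaction are removed first, just as load_data does):

```python
from collections import Counter

transactions = [
    ['M', 'O', 'N', 'K', 'E', 'Y'],
    ['D', 'O', 'N', 'K', 'E', 'Y'],
    ['M', 'A', 'K', 'E'],
    ['M', 'U', 'C', 'K', 'Y'],
    ['C', 'O', 'O', 'K', 'I', 'E'],
]
min_support = 3

counts = Counter()
for t in transactions:
    counts.update(set(t))  # deduplicate within each transaction

# frequent items sorted by descending support -- the list L of step 1(a);
# ties (M, O, Y all have count 3) may come out in any relative order
L_order = [item for item, c in counts.most_common() if c >= min_support]
```

This gives K:5, E:4 and M, O, Y at 3 each, matching the frequent 1-itemsets E/K/M/O/Y reported in section 4.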

4. Results


Frequent itemsets:

E/K/M/O/Y

KE/OE/KM/OK/YK/OKE

Strong association rules and their confidences:

K --> E: 0.8

E --> K: 1.0

O --> E: 1.0

M --> K: 1.0

O --> K: 1.0

Y --> K: 1.0

O --> E K: 1.0
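Each of these confidences follows directly from raw support counts, since conf(X --> Y) = support(X ∪ Y) / support(X). A brute-force cross-check over the same five (deduplicated) transactions:

```python
transactions = [
    {'M', 'O', 'N', 'K', 'E', 'Y'},
    {'D', 'O', 'N', 'K', 'E', 'Y'},
    {'M', 'A', 'K', 'E'},
    {'M', 'U', 'C', 'K', 'Y'},
    {'C', 'O', 'K', 'I', 'E'},
]

def support(itemset):
    """Absolute support count: transactions containing the whole itemset."""
    return sum(1 for t in transactions if itemset <= t)

def confidence(antecedent, consequent):
    """conf(X --> Y) = support(X | Y) / support(X)."""
    return support(antecedent | consequent) / support(antecedent)

# e.g. K --> E: support({K,E}) = 4 and support({K}) = 5, so conf = 0.8
```

This reproduces the table above, e.g. confidence({'K'}, {'E'}) = 0.8 and confidence({'O'}, {'E', 'K'}) = 1.0.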

Program start time: 1655193050.1952014

Program end time: 1655193050.2762582

Elapsed time: 0.08105683326721191

Memory usage of current process: 0.0169 GB

References for background material:

机器学习实战(十一)FP-growth算法_Qxw1012的博客-CSDN博客_fpgrowth算法步骤

FP-growth算法,fpgrowth算法详解_javastart的博客-CSDN博客_fpgrowth算法

FP-growth算法理解和实现_木百栢的博客-CSDN博客_fp growth

This article is just my personal study notes; if there are any mistakes, corrections are welcome!

Original: https://blog.csdn.net/qq_55906442/article/details/124684811
Author: 一个人的牛牛
Title: Fp-growth算法python实现(数据挖掘学习笔记)

Reprinted from: https://www.johngo689.com/638407/

