Regression-Based Collaborative Filtering (Stochastic Gradient Descent + Alternating Least Squares Optimization)

If we treat a rating as a continuous value rather than a discrete one, we can borrow the idea of linear regression to predict a target user's rating for an item. One such strategy is called the Baseline (baseline predictor).

1. Baseline: the baseline predictor

The Baseline approach is built on the following assumptions:

  • Some users rate systematically higher than others, and some rate systematically lower. For example, some users are soft-hearted and happy to hand out good reviews, while others are harsh and never rate above 3 (out of 5);
  • Some items receive systematically higher ratings than others, and some receive systematically lower ones. An item's standing is largely fixed once it is produced: some items are widely liked, others widely disliked.

This gap between a user's (or item's) typical rating and the overall average is called the bias.

2. Baseline objectives:

  • Find each user's bias $b_u$: how much higher or lower that user rates than others
  • Find each item's bias $b_i$: how much higher or lower that item is rated than other items
  • The task reduces to finding the optimal $b_u$ and $b_i$

The steps for predicting a rating with the Baseline approach are:

  • Compute the average rating $\mu$ over all movies (the global mean)
  • Compute each user's bias $b_u$: the offset of that user's ratings from $\mu$
  • Compute each movie's bias $b_i$: the offset of the ratings it receives from $\mu$
  • Predict the user's rating for the movie:

$$\hat{r}_{ui} = b_{ui} = \mu + b_u + b_i$$

Example:

Suppose we want to use the Baseline to predict user A's rating for the movie "Forrest Gump". First we compute the average rating $\mu$ over the whole dataset, say 3.5. User A is a harsh rater whose scores run 0.5 below the average, so user A's bias is $b_u = -0.5$. "Forrest Gump" is a popular, well-received movie whose ratings run 1.2 above the average, so its bias is $b_i = +1.2$. The predicted rating for user A on "Forrest Gump" is therefore $3.5 + (-0.5) + 1.2 = 4.2$.
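The arithmetic in the example can be sketched in a couple of lines of Python (the numbers are the hypothetical ones from the example above, not learned values):

```python
# Baseline prediction: r_hat = mu + b_u + b_i
mu = 3.5    # global mean rating of the dataset
b_u = -0.5  # user A's bias (a strict rater)
b_i = 1.2   # bias of "Forrest Gump" (a popular, well-liked movie)

prediction = mu + b_u + b_i
print(prediction)  # 4.2
```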

The global mean $\mu$ can be computed directly, so the problem is to estimate each user's $b_u$ and each movie's $b_i$. As in linear regression, we can build a squared-error loss function:

$$Cost = \sum_{u,i\in R}(r_{ui}-\hat{r}_{ui})^2 = \sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i)^2$$

Adding L2 regularization:

$$Cost = \sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i)^2 + \lambda\left(\sum_u b_u^2 + \sum_i b_i^2\right)$$
Breaking the formula down:

  • The first part, $\sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i)^2$, finds the $b_u$ and $b_i$ that best fit the known ratings
  • The second part, $\lambda\left(\sum_u b_u^2 + \sum_i b_i^2\right)$, is the regularization term, which guards against overfitting
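As a quick sketch, this regularized cost can be evaluated directly on a toy rating set (the ratings, bias values, and λ below are invented purely for illustration):

```python
# toy (user, item, rating) triples -- invented for this sketch
ratings = [("u1", "i1", 4.0), ("u1", "i2", 3.0), ("u2", "i1", 5.0)]
mu = sum(r for _, _, r in ratings) / len(ratings)  # global mean = 4.0

bu = {"u1": -0.2, "u2": 0.3}   # hypothetical user biases
bi = {"i1": 0.5, "i2": -0.4}   # hypothetical item biases
lam = 0.1                      # regularization strength

# squared-error term over observed ratings, plus L2 penalty on all biases
cost = sum((r - mu - bu[u] - bi[i]) ** 2 for u, i, r in ratings)
cost += lam * (sum(v ** 2 for v in bu.values()) + sum(v ** 2 for v in bi.values()))
print(round(cost, 3))  # 0.344
```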

To minimize this loss, we typically use stochastic gradient descent (SGD) or alternating least squares (ALS).

3. Optimization methods

Predicting the Baseline biases with stochastic gradient descent

The loss function:

$$J(\theta) = Cost = f(b_u, b_i) = \sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i)^2 + \lambda\left(\sum_u b_u^2 + \sum_i b_i^2\right)$$
The generic gradient descent update rule:

$$\theta_j := \theta_j - \alpha\frac{\partial}{\partial \theta_j}J(\theta)$$
Gradient descent update for $b_u$:

Deriving the partial derivative of the loss:

$$\frac{\partial}{\partial b_u} J(\theta) = \frac{\partial}{\partial b_u} f(b_u, b_i) = 2\sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i)(-1) + 2\lambda b_u = -2\sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i) + 2\lambda b_u$$

The update for $b_u$ (since $\alpha$ is chosen by hand, the factor of 2 can be folded into it and dropped):

$$b_u := b_u - \alpha\left(-\sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i) + \lambda b_u\right) = b_u + \alpha\left(\sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i) - \lambda b_u\right)$$
By the same derivation, the gradient descent update for $b_i$:

$$b_i := b_i + \alpha\left(\sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i) - \lambda b_i\right)$$

Since stochastic gradient descent updates the parameters from each individual sample's loss rather than from the full sum, with SGD we use:

Single-sample error:

$$error = r_{ui} - \hat{r}_{ui} = r_{ui} - (\mu + b_u + b_i) = r_{ui} - \mu - b_u - b_i$$

Parameter updates:

$$b_u := b_u + \alpha\left((r_{ui}-\mu-b_u-b_i) - \lambda b_u\right) = b_u + \alpha(error - \lambda b_u)$$
$$b_i := b_i + \alpha\left((r_{ui}-\mu-b_u-b_i) - \lambda b_i\right) = b_i + \alpha(error - \lambda b_i)$$
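Putting the single-sample rule together, one SGD step on one observed rating looks like this (toy values; α and λ here are assumed hyperparameters, not tuned ones):

```python
mu, alpha, reg = 3.5, 0.1, 0.1  # global mean, learning rate, regularization
bu, bi = 0.0, 0.0               # biases start at zero
real_rating = 4.5               # one observed rating r_ui

error = real_rating - (mu + bu + bi)  # error = r_ui - (mu + b_u + b_i)
bu += alpha * (error - reg * bu)      # b_u := b_u + alpha * (error - lambda * b_u)
bi += alpha * (error - reg * bi)      # b_i := b_i + alpha * (error - lambda * b_i)
print(bu, bi)  # 0.1 0.1
```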
Dataset link: MovieLens Latest Datasets Small

import pandas as pd
import numpy as np

class BaselineCFBySGD(object):
    def __init__(self, number_epochs, alpha, reg, columns=None):
        """
        :param number_epochs: maximum number of gradient descent iterations
        :param alpha: learning rate
        :param reg: regularization parameter
        :param columns: names of the user-item-rating fields in the dataset
        """
        if columns is None:
            columns = ["uid", "iid", "rating"]
        self.number_epochs = number_epochs
        self.alpha = alpha
        self.reg = reg
        self.columns = columns

    def fit(self, dataset):
        """
        :param dataset: user rating data
        :return:
        """
        self.dataset = dataset
        # per-user lists of rated items and ratings
        self.users_ratings = dataset.groupby(self.columns[0]).agg([list])[[self.columns[1], self.columns[2]]]
        # per-item lists of rating users and ratings
        self.items_ratings = dataset.groupby(self.columns[1]).agg([list])[[self.columns[0], self.columns[2]]]
        # global mean rating
        self.global_mean = self.dataset[self.columns[2]].mean()
        self.bu, self.bi = self.sgd()

    def sgd(self):
        """
        Optimize bu and bi with stochastic gradient descent.
        :return: bu, bi
        """
        # initialize all biases to zero
        bu = dict(zip(self.users_ratings.index, np.zeros(len(self.users_ratings))))
        bi = dict(zip(self.items_ratings.index, np.zeros(len(self.items_ratings))))
        for i in range(self.number_epochs):
            print("iter %d starting:" % i)
            for uid, iid, real_rating in self.dataset.itertuples(index=False):
                error = real_rating - (self.global_mean + bu[uid] + bi[iid])
                bu[uid] += self.alpha * (error - self.reg * bu[uid])
                bi[iid] += self.alpha * (error - self.reg * bi[iid])
        return bu, bi

    def predict(self, uid, iid):
        predict_rating = self.global_mean + self.bu[uid] + self.bi[iid]
        return predict_rating

if __name__ == '__main__':
    # read_csv expects dtype as a mapping from column name to type
    dtype = {"userId": np.int32, "movieId": np.int32, "rating": np.float32}
    dataset = pd.read_csv("ratings.csv", usecols=range(3), dtype=dtype)
    bcf = BaselineCFBySGD(10, 0.1, 0.1, columns=["userId", "movieId", "rating"])
    bcf.fit(dataset)
    print(bcf.predict(1, 1))
  • Add a test method, then use the accuracy method implemented earlier to compute accuracy metrics
import pandas as pd
import numpy as np

def data_split(data_path, x=0.8, random=False):
"""
    切分数据集,为了保证用户数量保持不变,将每个用户的评分数据按比例进行拆分
    :param data_path: 数据集路径
    :param x: 训练集的比例,如x=0.8,则0.2是测试集
    :param random: 是否随机切分
    :return: 用户-物品评分矩阵
"""
    print("开始切分数据集...")

    dtype = {'userId': np.int32, 'movieId': np.int32, 'rating': np.float32}

    ratings = pd.read_csv(data_path, dtype=dtype, usecols=range(3))

    testset_index = []

    for uid in ratings.groupby('userId').any().index:
        user_rating_data = ratings.where(ratings['userId'] == uid).dropna()
        if random:
            index = list(user_rating_data.index)
            np.random.shuffle(index)
            _index = round(len(user_rating_data) * x)
            testset_index += list(index[_index:])
        else:
            index = round(len(user_rating_data) * x)
            testset_index += list(user_rating_data.index.values[index:])
    testset = ratings.loc[testset_index]
    trainset = ratings.drop(testset_index)
    print("完成数据集切分...")
    return trainset, testset

def accuracy(predict_results, method="all"):
"""
    准确性指标
    :param predict_results: 预测结果,类型为容器,每个元素是一个包含uid,iid,real_rating,pred_rating的序列
    :param method: 指标方法,类型为字符串,rmse或mae,否则返回两者rmse和mae
    :return: 指标值
"""

    def rmse(predict_results):
        length = 0
        _rmse_sum = 0
        for uid, iid, real_rating, pred_rating in predict_results:
            length += 1
            _rmse_sum += (pred_rating - real_rating) ** 2
        return round(np.sqrt(_rmse_sum / length), 4)

    def mae(predict_results):
        length = 0
        _mae_sum = 0
        for uid, iid, real_rating, pred_rating in predict_results:
            length += 1
            _mae_sum += abs(pred_rating - real_rating)
        return round(_mae_sum / length, 4)

    def rmse_mae(predict_results):
        length = 0
        _rmse_sum = 0
        _mae_sum = 0
        for uid, iid, real_rating, pred_rating in predict_results:
            length += 1
            _rmse_sum += (pred_rating - real_rating) ** 2
            _mae_sum += abs(pred_rating - real_rating)
        return round(np.sqrt(_rmse_sum / length), 4), round(_mae_sum / length, 4)

    if method.lower() == "rmse":
        rmse(predict_results)
    elif method.lower() == "mae":
        mae(predict_results)
    else:
        return rmse_mae(predict_results)

class BaselineCFBySGD(object):
    def __init__(self, number_epochs, alpha, reg, columns=None):
"""
        :param number_epochs: 梯度下降最高迭代次数
        :param alpha: 学习率
        :param reg: 正则参数
        :param columns: 数据集中user-item-rating字段的名称
"""
        if columns is None:
            columns = ["uid", "iid", "rating"]
        self.number_epochs = number_epochs
        self.alpha = alpha
        self.reg = reg
        self.columns = columns

    def fit(self, dataset):
"""
        :param dataset: 用户评分数据
        :return:
"""
        self.dataset = dataset

        self.users_ratings = dataset.groupby(self.columns[0]).agg([list])[[self.columns[1], self.columns[2]]]

        self.items_ratings = dataset.groupby(self.columns[1]).agg([list])[[self.columns[0], self.columns[2]]]

        self.global_mean = self.dataset[self.columns[2]].mean()

        self.bu, self.bi = self.sgd()

    def sgd(self):
"""
        利用随机梯度下降,优化bu,bi的值
        :return: bu, bi
"""

        bu = dict(zip(self.users_ratings.index, np.zeros(len(self.users_ratings))))
        bi = dict(zip(self.items_ratings.index, np.zeros(len(self.items_ratings))))
        for i in range(self.number_epochs):
            print("iter%d 开始:" % i)
            for uid, iid, real_rating in self.dataset.itertuples(index=False):
                error = real_rating - (self.global_mean + bu[uid] + bi[iid])
                bu[uid] += self.alpha * (error - self.reg * bu[uid])
                bi[iid] += self.alpha * (error - self.reg * bi[iid])
        return bu, bi

    def predict(self, uid, iid):
        # biases are only learned for users/items seen in the training set
        if uid not in self.users_ratings.index or iid not in self.items_ratings.index:
            raise Exception("Cannot predict user {uid}'s rating for movie {iid}: missing from the training set".format(uid=uid, iid=iid))
        predict_rating = self.global_mean + self.bu[uid] + self.bi[iid]
        return predict_rating

    def test(self, testset):

        for uid, iid, real_rating in testset.itertuples(index=False):
            try:
                pred_rating = self.predict(uid, iid)
            except Exception as e:
                print(e)
            else:
                yield uid, iid, real_rating, pred_rating

if __name__ == '__main__':
    trainset, testset = data_split("ratings.csv", random=True)
    bcf = BaselineCFBySGD(20, 0.1, 0.1, ["userId", "movieId", "rating"])
    bcf.fit(trainset)
    pred_results = bcf.test(testset)
    rmse, mae = accuracy(pred_results)
    print("rmse: ", rmse, "mae: ", mae)

Predicting the Baseline biases with alternating least squares

Like gradient descent, least squares can be used to find extrema of a function.

The least-squares idea: take the partial derivatives of the loss function and set them to zero.

The loss function, as before:

$$J(\theta) = \sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i)^2 + \lambda\left(\sum_u b_u^2 + \sum_i b_i^2\right)$$
Taking the partial derivative of the loss:

$$\frac{\partial}{\partial b_u} f(b_u, b_i) = -2\sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i) + 2\lambda b_u$$
Setting the derivative to zero gives:

$$\sum_{u,i\in R}(r_{ui}-\mu-b_u-b_i) = \lambda b_u$$
$$\sum_{u,i\in R}(r_{ui}-\mu-b_i) = \sum_{u,i\in R} b_u + \lambda b_u$$
To simplify, we set $\sum_{u,i\in R} b_u = |R(u)| \cdot b_u$: for a fixed user $u$ the sum runs over the $|R(u)|$ ratings that $u$ has made, and each term contributes the same $b_u$. Solving for $b_u$ gives:

$$b_u := \cfrac{\sum_{u,i\in R}(r_{ui}-\mu-b_i)}{\lambda_1 + |R(u)|}$$

where $|R(u)|$ denotes the number of ratings user $u$ has made.

By the same reasoning:

$$b_i := \cfrac{\sum_{u,i\in R}(r_{ui}-\mu-b_u)}{\lambda_2 + |R(i)|}$$

where $|R(i)|$ denotes the number of ratings item $i$ has received.

Since $b_u$ and $b_i$ are the user and item biases respectively, their regularization can use two independent parameters, $\lambda_1$ and $\lambda_2$.

The least-squares derivation yields closed-form expressions for $b_u$ and $b_i$, but each expression contains the other, so we compute them with a method called alternating least squares:

  • To solve for one set of parameters, first fix the other unknowns, i.e. treat them as known
  • When solving for $b_u$, treat $b_i$ as known; when solving for $b_i$, treat $b_u$ as known. Alternate back and forth, repeatedly updating both, until we arrive at the final values. This is alternating least squares (ALS).
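The alternating scheme can be sketched on a tiny made-up rating set: each pass solves every item bias with the user biases held fixed, then every user bias with the item biases held fixed:

```python
# toy (user, item, rating) triples -- invented for this sketch
ratings = [("u1", "i1", 4.0), ("u2", "i1", 5.0), ("u1", "i2", 3.0)]
mu = sum(r for _, _, r in ratings) / len(ratings)  # global mean
bu = {"u1": 0.0, "u2": 0.0}
bi = {"i1": 0.0, "i2": 0.0}
reg_bu = reg_bi = 0.1  # assumed regularization parameters (lambda_1, lambda_2)

for _ in range(10):  # a few alternating passes suffice here
    # solve each item bias with bu fixed: sum(r - mu - bu) / (lambda_2 + |R(i)|)
    for item in bi:
        rs = [(u, r) for u, i, r in ratings if i == item]
        bi[item] = sum(r - mu - bu[u] for u, r in rs) / (reg_bi + len(rs))
    # solve each user bias with bi fixed: sum(r - mu - bi) / (lambda_1 + |R(u)|)
    for user in bu:
        rs = [(i, r) for u, i, r in ratings if u == user]
        bu[user] = sum(r - mu - bi[i] for i, r in rs) / (reg_bu + len(rs))

# u2 rated above the mean, u1 below; i1 is better liked than i2
print(bu, bi)
```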
import pandas as pd
import numpy as np

def data_split(data_path, x=0.8, random=False):
"""
    切分数据集,为了保证用户数量保持不变,将每个用户的评分数据按比例进行拆分
    :param data_path: 数据集路径
    :param x: 训练集的比例,如x=0.8,则0.2是测试集
    :param random: 是否随机切分
    :return: 用户-物品评分矩阵
"""
    print("开始切分数据集...")

    dtype = {'userId': np.int32, 'movieId': np.int32, 'rating': np.float32}

    ratings = pd.read_csv(data_path, dtype=dtype, usecols=range(3))

    testset_index = []

    for uid in ratings.groupby('userId').any().index:
        user_rating_data = ratings.where(ratings['userId'] == uid).dropna()
        if random:
            index = list(user_rating_data.index)
            np.random.shuffle(index)
            _index = round(len(user_rating_data) * x)
            testset_index += list(index[_index:])
        else:
            index = round(len(user_rating_data) * x)
            testset_index += list(user_rating_data.index.values[index:])
    testset = ratings.loc[testset_index]
    trainset = ratings.drop(testset_index)
    print("完成数据集切分...")
    return trainset, testset

def accuracy(predict_results, method="all"):
"""
    准确性指标
    :param predict_results: 预测结果,类型为容器,每个元素是一个包含uid,iid,real_rating,pred_rating的序列
    :param method: 指标方法,类型为字符串,rmse或mae,否则返回两者rmse和mae
    :return: 指标值
"""

    def rmse(predict_results):
        length = 0
        _rmse_sum = 0
        for uid, iid, real_rating, pred_rating in predict_results:
            length += 1
            _rmse_sum += (pred_rating - real_rating) ** 2
        return round(np.sqrt(_rmse_sum / length), 4)

    def mae(predict_results):
        length = 0
        _mae_sum = 0
        for uid, iid, real_rating, pred_rating in predict_results:
            length += 1
            _mae_sum += abs(pred_rating - real_rating)
        return round(_mae_sum / length, 4)

    def rmse_mae(predict_results):
        length = 0
        _rmse_sum = 0
        _mae_sum = 0
        for uid, iid, real_rating, pred_rating in predict_results:
            length += 1
            _rmse_sum += (pred_rating - real_rating) ** 2
            _mae_sum += abs(pred_rating - real_rating)
        return round(np.sqrt(_rmse_sum / length), 4), round(_mae_sum / length, 4)

    if method.lower() == "rmse":
        rmse(predict_results)
    elif method.lower() == "mae":
        mae(predict_results)
    else:
        return rmse_mae(predict_results)

class BaselineCFByALS(object):
    def __init__(self, number_epochs, reg_bu, reg_bi, columns=None):
        """
        :param number_epochs: maximum number of ALS iterations
        :param reg_bu: regularization parameter for user biases
        :param reg_bi: regularization parameter for item biases
        :param columns: names of the user-item-rating fields in the dataset
        """
        if columns is None:
            columns = ["uid", "iid", "rating"]
        self.number_epochs = number_epochs
        self.reg_bu = reg_bu
        self.reg_bi = reg_bi
        self.columns = columns

    def fit(self, dataset):
        """
        :param dataset: user rating data
        :return:
        """
        self.dataset = dataset
        # per-user lists of rated items and ratings
        self.users_ratings = dataset.groupby(self.columns[0]).agg([list])[[self.columns[1], self.columns[2]]]
        # per-item lists of rating users and ratings
        self.items_ratings = dataset.groupby(self.columns[1]).agg([list])[[self.columns[0], self.columns[2]]]
        # global mean rating
        self.global_mean = self.dataset[self.columns[2]].mean()
        self.bu, self.bi = self.als()

    def als(self):
        """
        Optimize bu and bi with alternating least squares.
        :return: bu, bi
        """
        bu = dict(zip(self.users_ratings.index, np.zeros(len(self.users_ratings))))
        bi = dict(zip(self.items_ratings.index, np.zeros(len(self.items_ratings))))
        for i in range(self.number_epochs):
            print("iter %d" % i)
            # update item biases with user biases held fixed
            for iid, uids, ratings in self.items_ratings.itertuples(index=True):
                _sum = 0
                for uid, rating in zip(uids, ratings):
                    _sum += rating - self.global_mean - bu[uid]
                bi[iid] = _sum / (self.reg_bi + len(uids))
            # update user biases with item biases held fixed
            for uid, iids, ratings in self.users_ratings.itertuples(index=True):
                _sum = 0
                for iid, rating in zip(iids, ratings):
                    _sum += rating - self.global_mean - bi[iid]
                bu[uid] = _sum / (self.reg_bu + len(iids))
        return bu, bi

    def predict(self, uid, iid):
        # biases are only learned for users/items seen in the training set
        if uid not in self.users_ratings.index or iid not in self.items_ratings.index:
            raise Exception("Cannot predict user {uid}'s rating for movie {iid}: missing from the training set".format(uid=uid, iid=iid))
        predict_rating = self.global_mean + self.bu[uid] + self.bi[iid]
        return predict_rating

    def test(self, testset):
        # yield (uid, iid, real_rating, pred_rating) for every predictable pair
        for uid, iid, real_rating in testset.itertuples(index=False):
            try:
                pred_rating = self.predict(uid, iid)
            except Exception as e:
                print(e)
            else:
                yield uid, iid, real_rating, pred_rating

if __name__ == '__main__':
    trainset, testset = data_split("ratings.csv", random=True)
    bcf = BaselineCFByALS(20, 0.1, 0.1, ["userId", "movieId", "rating"])
    bcf.fit(trainset)
    pred_results = bcf.test(testset)
    rmse, mae = accuracy(pred_results)
    print("rmse: ", rmse, "mae: ", mae)

Original: https://blog.csdn.net/weixin_44936816/article/details/122783226
Author: lavineeeen
Title: Regression-Based Collaborative Filtering (Stochastic Gradient Descent + Alternating Least Squares Optimization)
