Kappa: An Agreement Evaluation Metric

I have recently been working on unsupervised classification of fundus images, using Kaggle's Diabetic Retinopathy dataset (DR for short), i.e. diabetic eye disease.

The final evaluation metric is the quadratic weighted kappa. I had never encountered it before and could not find a detailed introduction online, so here is my understanding of it; corrections are welcome.

Introduction

The kappa statistic measures how consistently two raters (for example, two models, or a model and a reference standard) judge the same images. Its value ranges from -1 to 1: 1 means perfect agreement, 0 means the agreement is no better than chance, and negative values mean worse-than-chance agreement.

It is computed by comparing the observed number of matching ratings against the number expected purely by chance: the further the observed agreement exceeds the chance expectation, the closer kappa gets to 1; when the observed agreement merely equals the chance expectation, kappa is 0.

For example, the kappa between model A and the reference standard:

[Table: example ratings of model A vs. the reference, from the original image]

kappa = (P0 - Pe) / (n - Pe)

where P0 is the sum of the observed counts on the diagonal, Pe is the sum of the expected counts on the diagonal, and n is the total number of observations.
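As a worked example with made-up counts (a sketch, not data from the DR task): suppose model A and the reference label 50 images into two classes, giving the 2x2 confusion matrix below. The simple kappa then follows directly from the formula above:

```python
# Hypothetical 2x2 confusion matrix of counts
# (rows = model A's label, columns = the reference label), n = 50 images.
conf = [[20, 5],
        [10, 15]]

n = sum(sum(row) for row in conf)                              # total observations: 50
p0 = sum(conf[i][i] for i in range(2))                         # observed diagonal: 20 + 15 = 35
row = [sum(conf[i][j] for j in range(2)) for i in range(2)]    # model A's marginals: [25, 25]
col = [sum(conf[i][j] for i in range(2)) for j in range(2)]    # reference marginals: [30, 20]
pe = sum(row[i] * col[i] / n for i in range(2))                # expected diagonal: 15 + 10 = 25.0

kappa = (p0 - pe) / (n - pe)
print(kappa)  # 0.4
```

Here 35 of 50 ratings agree, but 25 agreements were expected by chance alone, so kappa credits the model only for the 10 agreements beyond chance, out of the 25 possible: 10/25 = 0.4.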

Depending on how it is computed, kappa comes in two forms: simple kappa and weighted kappa; weighted kappa is further divided into linear weighted kappa and quadratic weighted kappa.

Weighted kappa

The choice between linear and quadratic weighted kappa depends on what the distance between classes means in your dataset. For the fundus-image data, class 0 means healthy and class 4 means very severe, late-stage disease, so predicting class 0 as class 4 should be penalized far more heavily than predicting class 0 as class 1. With quadratic weights the penalty grows with the squared distance between classes, so a 0→4 error costs 16 times as much as a 0→1 error. The figure below compares the two weighting schemes for a four-class problem.
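As a sketch, the two weight (penalty) matrices for a five-class problem (classes 0-4, matching the DR labels) can be built directly from their definitions, d_ij = |i-j|/(N-1) for linear and d_ij = (i-j)²/(N-1)² for quadratic:

```python
import numpy as np

# Index grids for a 5-class problem: i = true class, j = predicted class.
num_classes = 5
i, j = np.meshgrid(np.arange(num_classes), np.arange(num_classes), indexing="ij")

w_linear = np.abs(i - j) / (num_classes - 1)           # |i - j| / (N - 1)
w_quadratic = (i - j) ** 2 / (num_classes - 1) ** 2    # (i - j)^2 / (N - 1)^2

# Predicting class 0 as 4 costs 16x as much as predicting 0 as 1
# under the quadratic scheme, but only 4x under the linear one.
print(w_quadratic[0, 4] / w_quadratic[0, 1])  # 16.0
print(w_linear[0, 4] / w_linear[0, 1])        # 4.0
```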

[Figure: linear vs. quadratic weight matrices for a four-class problem]

Python implementation

Reference: https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/quadratic_weighted_kappa.py

#!/usr/bin/env python3

import numpy as np

def confusion_matrix(rater_a, rater_b, min_rating=None, max_rating=None):
    """
    Returns the confusion matrix between rater's ratings
    """
    assert(len(rater_a) == len(rater_b))
    if min_rating is None:
        min_rating = min(rater_a + rater_b)
    if max_rating is None:
        max_rating = max(rater_a + rater_b)
    num_ratings = int(max_rating - min_rating + 1)
    conf_mat = [[0 for i in range(num_ratings)]
                for j in range(num_ratings)]
    for a, b in zip(rater_a, rater_b):
        conf_mat[a - min_rating][b - min_rating] += 1
    return conf_mat

def histogram(ratings, min_rating=None, max_rating=None):
    """
    Returns the counts of each type of rating that a rater made
    """
    if min_rating is None:
        min_rating = min(ratings)
    if max_rating is None:
        max_rating = max(ratings)
    num_ratings = int(max_rating - min_rating + 1)
    hist_ratings = [0 for x in range(num_ratings)]
    for r in ratings:
        hist_ratings[r - min_rating] += 1
    return hist_ratings

def quadratic_weighted_kappa(rater_a, rater_b, min_rating=None, max_rating=None):
    """
    Calculates the quadratic weighted kappa
    quadratic_weighted_kappa calculates the quadratic weighted kappa
    value, which is a measure of inter-rater agreement between two raters
    that provide discrete numeric ratings.  Potential values range from -1
    (representing complete disagreement) to 1 (representing complete
    agreement).  A kappa value of 0 is expected if all agreement is due to
    chance.

    quadratic_weighted_kappa(rater_a, rater_b), where rater_a and rater_b
    each correspond to a list of integer ratings.  These lists must have the
    same length.

    The ratings should be integers, and it is assumed that they contain
    the complete range of possible ratings.

    quadratic_weighted_kappa(X, min_rating, max_rating), where min_rating
    is the minimum possible rating, and max_rating is the maximum possible
    rating
    """
    rater_a = np.array(rater_a, dtype=int)
    rater_b = np.array(rater_b, dtype=int)
    assert(len(rater_a) == len(rater_b))
    if min_rating is None:
        min_rating = min(min(rater_a), min(rater_b))
    if max_rating is None:
        max_rating = max(max(rater_a), max(rater_b))
    conf_mat = confusion_matrix(rater_a, rater_b,
                                min_rating, max_rating)
    num_ratings = len(conf_mat)
    num_scored_items = float(len(rater_a))

    hist_rater_a = histogram(rater_a, min_rating, max_rating)
    hist_rater_b = histogram(rater_b, min_rating, max_rating)

    numerator = 0.0
    denominator = 0.0

    for i in range(num_ratings):
        for j in range(num_ratings):
            expected_count = (hist_rater_a[i] * hist_rater_b[j]
                              / num_scored_items)
            d = pow(i - j, 2.0) / pow(num_ratings - 1, 2.0)
            numerator += d * conf_mat[i][j] / num_scored_items
            denominator += d * expected_count / num_scored_items

    return 1.0 - numerator / denominator

def linear_weighted_kappa(rater_a, rater_b, min_rating=None, max_rating=None):
    """
    Calculates the linear weighted kappa
    linear_weighted_kappa calculates the linear weighted kappa
    value, which is a measure of inter-rater agreement between two raters
    that provide discrete numeric ratings.  Potential values range from -1
    (representing complete disagreement) to 1 (representing complete
    agreement).  A kappa value of 0 is expected if all agreement is due to
    chance.

    linear_weighted_kappa(rater_a, rater_b), where rater_a and rater_b
    each correspond to a list of integer ratings.  These lists must have the
    same length.

    The ratings should be integers, and it is assumed that they contain
    the complete range of possible ratings.

    linear_weighted_kappa(X, min_rating, max_rating), where min_rating
    is the minimum possible rating, and max_rating is the maximum possible
    rating
    """
    assert(len(rater_a) == len(rater_b))
    if min_rating is None:
        min_rating = min(rater_a + rater_b)
    if max_rating is None:
        max_rating = max(rater_a + rater_b)
    conf_mat = confusion_matrix(rater_a, rater_b,
                                min_rating, max_rating)
    num_ratings = len(conf_mat)
    num_scored_items = float(len(rater_a))

    hist_rater_a = histogram(rater_a, min_rating, max_rating)
    hist_rater_b = histogram(rater_b, min_rating, max_rating)

    numerator = 0.0
    denominator = 0.0

    for i in range(num_ratings):
        for j in range(num_ratings):
            expected_count = (hist_rater_a[i] * hist_rater_b[j]
                              / num_scored_items)
            d = abs(i - j) / float(num_ratings - 1)
            numerator += d * conf_mat[i][j] / num_scored_items
            denominator += d * expected_count / num_scored_items

    return 1.0 - numerator / denominator

def kappa(rater_a, rater_b, min_rating=None, max_rating=None):
    """
    Calculates the kappa
    kappa calculates the kappa
    value, which is a measure of inter-rater agreement between two raters
    that provide discrete numeric ratings.  Potential values range from -1
    (representing complete disagreement) to 1 (representing complete
    agreement).  A kappa value of 0 is expected if all agreement is due to
    chance.

    kappa(rater_a, rater_b), where rater_a and rater_b
    each correspond to a list of integer ratings.  These lists must have the
    same length.

    The ratings should be integers, and it is assumed that they contain
    the complete range of possible ratings.

    kappa(X, min_rating, max_rating), where min_rating
    is the minimum possible rating, and max_rating is the maximum possible
    rating
    """
    assert(len(rater_a) == len(rater_b))
    if min_rating is None:
        min_rating = min(rater_a + rater_b)
    if max_rating is None:
        max_rating = max(rater_a + rater_b)
    conf_mat = confusion_matrix(rater_a, rater_b,
                                min_rating, max_rating)
    num_ratings = len(conf_mat)
    num_scored_items = float(len(rater_a))

    hist_rater_a = histogram(rater_a, min_rating, max_rating)
    hist_rater_b = histogram(rater_b, min_rating, max_rating)

    numerator = 0.0
    denominator = 0.0

    for i in range(num_ratings):
        for j in range(num_ratings):
            expected_count = (hist_rater_a[i] * hist_rater_b[j]
                              / num_scored_items)
            if i == j:
                d = 0.0
            else:
                d = 1.0
            numerator += d * conf_mat[i][j] / num_scored_items
            denominator += d * expected_count / num_scored_items

    return 1.0 - numerator / denominator

def mean_quadratic_weighted_kappa(kappas, weights=None):
    """
    Calculates the mean of the quadratic
    weighted kappas after applying Fisher's r-to-z transform, which is
    approximately a variance-stabilizing transformation.  This
    transformation is undefined if one of the kappas is 1.0, so all kappa
    values are capped in the range (-0.999, 0.999).  The reverse
    transformation is then applied before returning the result.

    mean_quadratic_weighted_kappa(kappas), where kappas is a vector of
    kappa values

    mean_quadratic_weighted_kappa(kappas, weights), where weights is a vector
    of weights that is the same size as kappas.  Weights are applied in the
    z-space
    """
    kappas = np.array(kappas, dtype=float)
    if weights is None:
        weights = np.ones(np.shape(kappas))
    else:
        weights = weights / np.mean(weights)

    # ensure that kappas are in the range [-.999, .999]
    kappas = np.array([min(x, .999) for x in kappas])
    kappas = np.array([max(x, -.999) for x in kappas])

    z = 0.5 * np.log((1 + kappas) / (1 - kappas)) * weights
    z = np.mean(z)
    return (np.exp(2 * z) - 1) / (np.exp(2 * z) + 1)

def weighted_mean_quadratic_weighted_kappa(solution, submission):
    predicted_score = submission[submission.columns[-1]].copy()
    predicted_score.name = "predicted_score"
    if predicted_score.index[0] == 0:
        predicted_score = predicted_score[:len(solution)]
        predicted_score.index = solution.index
    combined = solution.join(predicted_score, how="left")
    groups = combined.groupby(by="essay_set")
    kappas = [quadratic_weighted_kappa(group[1]["essay_score"], group[1]["predicted_score"]) for group in groups]
    # Series.irow() was removed from pandas; use .iloc[0] instead.
    weights = [group[1]["essay_weight"].iloc[0] for group in groups]
    return mean_quadratic_weighted_kappa(kappas, weights=weights)
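As a quick sanity check on the listing above, the quadratic weighted kappa can also be written compactly with NumPy. The helper name `qwk` below is mine, not part of the referenced repository; it is a sketch that mirrors the same numerator/denominator structure (observed vs. chance-expected weighted disagreement):

```python
import numpy as np

def qwk(rater_a, rater_b, num_classes):
    # Confusion matrix of counts: rows = rater_a's label, columns = rater_b's.
    conf = np.zeros((num_classes, num_classes))
    for x, y in zip(rater_a, rater_b):
        conf[x, y] += 1
    # Quadratic weights d_ij = (i - j)^2 / (N - 1)^2.
    idx = np.arange(num_classes)
    w = (idx[:, None] - idx[None, :]) ** 2 / (num_classes - 1) ** 2
    # Chance-expected counts from the outer product of the two marginal histograms.
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / conf.sum()
    # 1 - (weighted observed disagreement) / (weighted expected disagreement).
    return 1.0 - (w * conf).sum() / (w * expected).sum()

# Perfect agreement: the diagonal carries zero weight, so kappa is exactly 1.
print(qwk([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5))  # 1.0
```

A single 0→1 disagreement among otherwise identical ratings then yields a kappa strictly between 0 and 1, as expected.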

Original: https://www.cnblogs.com/cs-markdown10086/p/16060629.html
Author: NEU_ShuaiCheng
Title: 一致性检验评价方法kappa
