CMU SDK: Notes on Using the MOSI Multimodal Data (Part 1)

CMU Multimodal Data

1 Downloading the data
The dataset contains three parts: highlevel, raw, and labels. highlevel holds features that have already been processed (extracted with tools such as FACET and openSMILE), while raw holds the raw features. The current SDK cannot detect whether a dataset has already been downloaded, so downloading again over an existing copy raises an error; we therefore wrap the call in a try...except. The code snippet is as follows:

from mmsdk import mmdatasdk as md

DATASET = md.cmu_mosi
DATA_PATH = './cmumosi/'  # local folder where the .csd files are stored (pick your own path)

# download the highlevel features; downloading over an existing copy raises an error
try:
    md.mmdataset(DATASET.highlevel, DATA_PATH)
except:
    print('have been downloaded')
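
The labels used in section 6 below (and the raw features, if you want them) are part of the same dataset, so they also need to be present in DATA_PATH. A minimal sketch, reusing the same try...except pattern:

try:
    md.mmdataset(DATASET.labels, DATA_PATH)
except:
    print('labels have been downloaded')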

2 A look at the downloaded files
In the DATA_PATH folder you will see files ending in .csd; this is an SDK data structure called a computational sequence.
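
A quick way to check what ended up there (a minimal sketch; os is imported again in the next section):

import os
# list the computational-sequence files in DATA_PATH
print([f for f in os.listdir(DATA_PATH) if f.endswith('.csd')])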

3 Loading the multimodal data
Build a dictionary in the format {modality_file_name: csd_path} and pass it to the md package to construct a dataset.

import os

visual_field = 'CMU_MOSI_Visual_Facet_41.csd'
acoustic_field = 'CMU_MOSI_COVAREP.csd'
text_field = 'CMU_MOSI_ModifiedTimestampedWords.csd'

features = [
    text_field,
    visual_field,
    acoustic_field
]

# {modality_file_name: csd_path} recipe passed to the SDK to build the dataset
recipe = {feat: os.path.join(DATA_PATH, feat) for feat in features}
dataset = md.mmdataset(recipe)

4 A look at the dataset

  • Each dataset consists of one computational sequence per modality. Every computational sequence contains many videos, and the three computational sequences contain the same set of video ids.
  • The keys of the dataset are the three modality_file_names above. Each modality is itself dictionary-like: its keys are video ids, and each value holds two arrays, features and intervals, where intervals records the start and end time of every time step.
print(list(dataset.keys()))
print("=" * 50)

print(list(dataset[visual_field].keys())[:10])
print("=" * 50)

some_id = list(dataset[visual_field].keys())[15]
print(list(dataset[visual_field][some_id].keys()))
print("=" * 50)

print(list(dataset[visual_field][some_id]['intervals'].shape))

print("=" * 50)

print(list(dataset[visual_field][some_id]['features'].shape))
print(list(dataset[text_field][some_id]['features'].shape))
print(list(dataset[acoustic_field][some_id]['features'].shape))

print("Different modalities have different number of time steps!")

5 Aligning the time steps of different modalities
The main idea is to align all other modalities to the text modality so that every modality ends up with the same number of time steps. For each word, the features of the other modalities that fall inside that word's interval are first collected into a "bucket", which is then reduced with a so-called collapse function; here it simply performs average pooling.


import numpy as np

# collapse function: average-pool all feature rows that fall inside one word's interval
def avg(intervals: np.array, features: np.array) -> np.array:
    try:
        return np.average(features, axis=0)
    except:
        return features

# align every other modality to the text modality, pooling with avg
dataset.align(text_field, collapse_functions=[avg])

Note that after alignment the video ids change: each original id becomes id[seg], with one key per segment.
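
A quick way to see the new keys (a minimal sketch; the exact id strings depend on the data):

# after align(), every key has the form 'video_id[segment_index]'
print(list(dataset[text_field].keys())[:5])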

6 Adding the labels and aligning to them

Our goal now is to add the labels to the dataset; the labels are themselves a computational sequence.
label_field = 'CMU_MOSI_Opinion_Labels'

label_recipe = {label_field: os.path.join(DATA_PATH, label_field + '.csd')}
dataset.add_computational_sequences(label_recipe, destination=None)
# align to the labels so that every remaining key corresponds to exactly one labeled segment
dataset.align(label_field)
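
A quick sanity check that the labels were added and that the dataset is now keyed by labeled segments (a minimal sketch):

print(list(dataset.keys()))              # now also contains the label field
print(len(dataset[label_field].keys()))  # number of labeled segments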

7 Splitting the dataset
The SDK provides standard folds of video ids for splitting into train/dev/test sets. After alignment, however, the keys have become id[seg], so we use a regular expression to recover each segment's original video id and route the segment to the corresponding split. At the same time we z-normalize the visual and acoustic features and replace every word with a unique integer id.


train_split = DATASET.standard_folds.standard_train_fold
dev_split = DATASET.standard_folds.standard_valid_fold
test_split = DATASET.standard_folds.standard_test_fold

import re
from collections import defaultdict

EPS = 1e-8  # small constant to avoid division by zero during z-normalization (value is a common choice)

# map every word to a unique integer id; a new word gets the next free id
word2id = defaultdict(lambda: len(word2id))
UNK = word2id['<unk>']
PAD = word2id['<pad>']

train = []
test = []
dev = []

pattern = re.compile(r'(.*)\[.*\]')  # recover the original video id from 'video_id[segment]'
num_drop = 0
for segment in dataset[label_field].keys():
    vid = re.search(pattern, segment).group(1)
    label = dataset[label_field][segment]['features']
    _words = dataset[text_field][segment]['features']
    _visual = dataset[visual_field][segment]['features']
    _acoustic = dataset[acoustic_field][segment]['features']

    # drop segments whose modalities ended up with different numbers of time steps
    if not (_words.shape[0] == _visual.shape[0] == _acoustic.shape[0]):
        print('the modalities of this segment have different lengths, dropping it!')
        num_drop += 1
        continue

    label = np.nan_to_num(label)
    _visual = np.nan_to_num(_visual)
    _acoustic = np.nan_to_num(_acoustic)

    words = []
    visual = []
    acoustic = []
    for i, word in enumerate(_words):
        if word[0] != b'sp':  # skip silence ('sp') tokens
            words.append(word2id[word[0].decode('utf-8')])  # the SDK stores words as byte strings
            visual.append(_visual[i, :])
            acoustic.append(_acoustic[i, :])

    words = np.asarray(words)
    visual = np.asarray(visual)
    acoustic = np.asarray(acoustic)

    visual = np.nan_to_num((visual - visual.mean(0, keepdims=True)) / (EPS + np.std(visual, axis=0, keepdims=True)))
    acoustic = np.nan_to_num((acoustic - acoustic.mean(0, keepdims=True)) / (EPS + np.std(acoustic, axis=0, keepdims=True)))

    if vid in train_split:
        train.append(((words, visual, acoustic),label, segment))
    elif vid in dev_split:
        dev.append(((words, visual, acoustic), label, segment))
    elif vid in test_split:
        test.append(((words, visual, acoustic), label, segment))
    else:
        print(f"Found video that doesn't belong to any splits: {vid}")

print(f"Dropped {num_drop} data points in total.")

# from now on, unseen words map to UNK instead of being assigned new ids
def return_unk():
    return UNK
word2id.default_factory = return_unk
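
With return_unk installed as the default factory, looking up a word that never appeared while building the splits now returns UNK instead of silently enlarging the vocabulary. A tiny illustration (the lookup string is just a hypothetical example):

print(word2id['hypothetical-unseen-word'] == UNK)  # True: unseen words map to UNK
print(len(word2id))                                # the vocabulary no longer grows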

8 The PyTorch collate function and building DataLoaders
We now have the train/test/dev sets as Python lists. In PyTorch, a collate_fn is used to assemble individual samples from a dataset into a batch.
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def multi_collate(batch):
    '''
    Collate function; assumes batch = [dataset[i] for i in index_set]
    '''

    # sort the batch by text length in descending order (needed if pack_padded_sequence is used later)
    batch = sorted(batch, key=lambda x: x[0][0].shape[0], reverse=True)

    labels = torch.cat([torch.from_numpy(sample[1]) for sample in batch], dim=0)

    # pad_sequence returns tensors of shape (max_seq_len, batch_size, ...); sentences are padded with PAD
    sentences = pad_sequence([torch.LongTensor(sample[0][0]) for sample in batch], padding_value=PAD)
    visual = pad_sequence([torch.FloatTensor(sample[0][1]) for sample in batch])
    acoustic = pad_sequence([torch.FloatTensor(sample[0][2]) for sample in batch])

    # original (unpadded) length of each text sequence
    lengths = torch.LongTensor([sample[0][0].shape[0] for sample in batch])
    return sentences, visual, acoustic, labels, lengths

batch_sz = 56
train_loader = DataLoader(train, shuffle=True, batch_size=batch_sz, collate_fn=multi_collate)
dev_loader = DataLoader(dev, shuffle=False, batch_size=batch_sz*3, collate_fn=multi_collate)
test_loader = DataLoader(test, shuffle=False, batch_size=batch_sz*3, collate_fn=multi_collate)

temp_loader = iter(DataLoader(test, shuffle=True, batch_size=8, collate_fn=multi_collate))
batch = next(temp_loader)

print(batch[0].shape)
print(batch[1].shape)
print(batch[2].shape)
print(batch[3])
print(batch[4])

Original: https://blog.csdn.net/Bourne1/article/details/114480999
Author: Bourne
