torchtext 0.9 was a major API overhaul, and 0.12 is another one.
The official tutorials set the bar too high, so here are a few small examples, based on the documentation, to lower the barrier to entry:
Note: if you have studied the classes in transformers, be aware that torchtext only offers a handful of very basic operations as single functions; you cannot freely take tokens and …
1. word → token
First, build a custom vocab:
from torchtext.vocab import vocab
from collections import Counter, OrderedDict
sentence = "Natural language processing strives to build machines that understand and respond to text or voice data and respond with text or speech of their own in much the same way humans do."
sentences_list = sentence.split(" ")
counter = Counter(sentences_list)
sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[1], reverse=True)
ordered_dict = OrderedDict(sorted_by_freq_tuples)
my_vocab = vocab(ordered_dict, specials=["<unk>", "<pad>"])
Finally, use:
print("Token for the word 'and':", my_vocab['and'])
which returns the token, 3.
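As a plain-Python sketch of how the index assignment above works (specials take the lowest indices, then the remaining words follow in the order of the OrderedDict, here descending frequency) — assuming the specials are ["<unk>", "<pad>"]:

```python
from collections import Counter

sentence = ("Natural language processing strives to build machines that "
            "understand and respond to text or voice data and respond with "
            "text or speech of their own in much the same way humans do.")
words = sentence.split(" ")

# Specials occupy the lowest indices; the remaining words keep the order
# they were passed in (here: sorted by descending frequency).
specials = ["<unk>", "<pad>"]
ordered = [w for w, _ in sorted(Counter(words).items(),
                                key=lambda x: x[1], reverse=True)]
stoi = {w: i for i, w in enumerate(specials + ordered)}
print(stoi["and"])  # -> 3, matching my_vocab['and']
```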
2. The mapping between tokens and words
The word → token mapping can be printed via:
print("word->token:", my_vocab.get_stoi())
Likewise, for token → word:
print("token->word:", my_vocab.get_itos())
3. Sentence → tokens
This requires a new class, VocabTransform. Note that the example below contains words that are not in the vocabulary ("second", "wa", "ka"), so a default index has to be set first, otherwise the lookup raises an error:
from torchtext.transforms import VocabTransform
my_vocab.set_default_index(-1)  # out-of-vocabulary words map to -1
vocab_transform = VocabTransform(my_vocab)
trans_token = vocab_transform([
["language", "processing"],
["second", "understand", "and", "respond", "to", "text"],
["wa", "ka", "ka", "ka", "ka", "ka"]])
print("Converted tokens:", trans_token)
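Conceptually, VocabTransform just performs a per-token table lookup. A plain-Python sketch, assuming out-of-vocabulary words map to a default index of -1 (the token values below are an abbreviated table for illustration only):

```python
# Abbreviated word->token table, for illustration.
stoi = {"<unk>": 0, "<pad>": 1, "to": 2, "and": 3, "respond": 4, "text": 5}

def lookup(seqs):
    # Look every word up in the table; unknown words fall back to -1.
    return [[stoi.get(w, -1) for w in seq] for seq in seqs]

print(lookup([["and", "respond"], ["wa", "ka"]]))  # -> [[3, 4], [-1, -1]]
```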
4. Truncating sentences to a fixed length
This likewise uses a new class, Truncate:
from torchtext.transforms import Truncate
truncate = Truncate(max_seq_len=3)
truncate_token = truncate(trans_token)
print("Truncated tokens (max length 3):", truncate_token)
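What Truncate does is simply clip every sequence to at most max_seq_len items; shorter sequences pass through unchanged. A minimal sketch with made-up token values:

```python
max_seq_len = 3
seqs = [[4, 8], [0, 15, 3, 9, 2, 5], [7, 7, 7, 7]]

# Keep at most the first max_seq_len tokens of each sequence.
truncated = [s[:max_seq_len] for s in seqs]
print(truncated)  # -> [[4, 8], [0, 15, 3], [7, 7, 7]]
```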
5. Modifying the tokens of sentences in batch
With AddToken you can prepend or append a chosen token to every sentence:
from torchtext.transforms import AddToken
begin_token = AddToken(token=1000, begin=True)
end_token = AddToken(token=-10000, begin=False)
add_end_token = end_token(begin_token(truncate_token))
print("Result after modifying the tokens (special tokens added at the beginning and end):", add_end_token)
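The three transforms above (lookup, truncate, add begin/end tokens) are typically chained; torchtext also ships a transforms.Sequential container for exactly this. As a plain-Python sketch of the equivalent pipeline — the function name, the abbreviated stoi table, and the begin/end token values are illustrative assumptions, not torchtext API:

```python
def pipeline(seqs, stoi, max_seq_len=3, bos=1000, eos=-10000, default=-1):
    # 1) word -> token lookup, with a default index for OOV words
    toks = [[stoi.get(w, default) for w in s] for s in seqs]
    # 2) truncate every sequence to max_seq_len
    toks = [t[:max_seq_len] for t in toks]
    # 3) add begin/end tokens to every sequence
    return [[bos] + t + [eos] for t in toks]

stoi = {"to": 2, "and": 3, "respond": 4, "text": 5}  # abbreviated table
print(pipeline([["and", "respond", "to", "text"], ["wa"]], stoi))
# -> [[1000, 3, 4, 2, -10000], [1000, -1, -10000]]
```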
Full code:
from torchtext.vocab import vocab
from collections import Counter, OrderedDict
sentence = "Natural language processing strives to build machines that understand and respond to text or voice data and respond with text or speech of their own in much the same way humans do."
sentences_list = sentence.split(" ")
counter = Counter(sentences_list)
sorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[1], reverse=True)
ordered_dict = OrderedDict(sorted_by_freq_tuples)
my_vocab = vocab(ordered_dict, specials=["<unk>", "<pad>"])
my_vocab.set_default_index(-1)
print("Token for the word 'and':", my_vocab['and'])
print("Token for the word 'apple' (not in the corpus):", my_vocab['apple'])
from torchtext.transforms import VocabTransform
vocab_transform = VocabTransform(my_vocab)
trans_token = vocab_transform([
    ["language", "processing"],
    ["second", "understand", "and", "respond", "to", "text"],
    ["wa", "ka", "ka", "ka", "ka", "ka"]])
print("Converted tokens:", trans_token)
print("token->word:", my_vocab.get_itos())
print("word->token:", my_vocab.get_stoi())
from torchtext.transforms import Truncate
truncate = Truncate(max_seq_len=3)
truncate_token = truncate(trans_token)
print("Truncated tokens (max length 3):", truncate_token)
from torchtext.transforms import AddToken
begin_token = AddToken(token=1000, begin=True)
end_token = AddToken(token=-10000, begin=False)
add_end_token = end_token(begin_token(truncate_token))
print("Result after modifying the tokens (special tokens added at the beginning and end):", add_end_token)
Original: https://blog.csdn.net/weixin_35757704/article/details/124409718
Author: 呆萌的代Ma
Title: Torchtext 0.12+ New API: Learning and Usage Examples (1)