Elasticsearch: Removing tokens by type

April 22, 2023   |   by mebius

In my previous article "Elasticsearch: Examples of using token filters in an analyzer", I showed many examples of how to use token filters in an analyzer to filter tokens. In today's article, I will show how to use another filter to keep or remove tokens based on their type.

The keep types token filter (keep_types) can keep or remove tokens according to their type. Imagine a project description field: this kind of field usually receives text containing both words and numbers. Generating tokens for all of that text may not make sense, and to avoid this we will use the keep_types token filter.
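Before filtering anything, it helps to see which type names the standard tokenizer actually assigns, since these are the names that keep_types matches against. Here is a quick check, a minimal sketch in which any sample text would do:

GET _analyze
{
  "tokenizer": "standard",
  "text": "Karl Marx was born on May 5, 1818."
}

In the response, word tokens such as "Karl" and "born" carry the type <ALPHANUM>, while "5" and "1818" carry the type <NUM>.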

Removing numeric tokens

To remove numeric tokens, set the "types" parameter to "<NUM>"; this parameter accepts a list of token types. The "mode" parameter is set to "exclude".

Example:

GET _analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "keep_tgcodetypes",
      "types": [ "" ],
      "mode": "exclude"
    },
    {
      "type": "stop"
    }
  ],
  "text": "The German philosopher and economist Karl Marx was born on May 5, 1818."
}

The tokens returned by the above command are:

{
  "tokens": [
    {
      "token": "The",
      "start_offset": 0,
      "end_offset": 3,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "German",
      "start_offset": 4,
      "end_offset": 10,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "philosopher",
      "start_offset": 11,
      "end_offset": 22,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "economist",
      "start_offset": 27,
      "end_offset": 36,
      "type": "<ALPHANUM>",
      "position": 4
    },
    {
      "token": "Karl",
      "start_offset": 37,
      "end_offset": 41,
      "type": "<ALPHANUM>",
      "position": 5
    },
    {
      "token": "Marx",
      "start_offset": 42,
      "end_offset": 46,
      "type": "<ALPHANUM>",
      "position": 6
    },
    {
      "token": "born",
      "start_offset": 51,
      "end_offset": 55,
      "type": "<ALPHANUM>",
      "position": 8
    },
    {
      "token": "May",
      "start_offset": 59,
      "end_offset": 62,
      "type": "<ALPHANUM>",
      "position": 10
    }
  ]
}

From the output above, we can see that all of the numeric tokens have been removed.

We can also try the following command to keep only the numbers:

GET _analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "keep_types",
      "types": [ "" ],
      "mode": "include"
    },
    {
      "type": "stop"
    }
  ],
  "text": "The German philosopher and economist Karl Marx was born on May 5, 1818."
}

The tokens from the above command are:

{
  "tokens": [
    {
      "token": "5",
      "start_offset": 63,
      "end_offset": 64,
      "type": "",
      "position": 11
    },
    {
      "token": "1818",
      "start_offset": 66,
      "end_offset": 70,
      "type": "",
      "position": 12
    }
  ]
}

Removing alphanumeric tokens

To remove the text tokens, we simply set the "types" field to "<ALPHANUM>" while keeping "mode" set to "exclude".

GET _analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "keep_types",
      "types": [ "" ],
      "mode": "exclude"
    },
    {
      "type": "stop"
    }
  ],
  "text": "The German philosopher and economist Karl Marx was born on May 5, 1818."
}

Now we are left with only the numeric tokens:

{
  "tokens": [
    {
      "token": "5",
      "start_offset": 63,
      "end_offset": 64,
      "type": "<NUM>",
      "position": 11
    },
    {
      "token": "1818",
      "start_offset": 66,
      "end_offset": 70,
      "type": "",
      "position": 12
    }
  ]
}
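In practice, the filter is usually defined once inside an index's analysis settings instead of being passed to _analyze every time. Below is a minimal sketch; the index name my-index-000001, the analyzer name remove_numbers_analyzer, and the filter name remove_numbers are hypothetical names chosen for illustration:

PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "remove_numbers_analyzer": {
          "tokenizer": "standard",
          "filter": [ "remove_numbers", "stop" ]
        }
      },
      "filter": {
        "remove_numbers": {
          "type": "keep_types",
          "types": [ "<NUM>" ],
          "mode": "exclude"
        }
      }
    }
  }
}

The analyzer can then be assigned to a field mapping, for example the description field mentioned at the beginning, or tested with GET my-index-000001/_analyze by specifying "analyzer": "remove_numbers_analyzer".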

