Background

We need Elasticsearch's alerting feature, but alerting is only available in the paid tiers, so let's crack the Platinum license.

How it works

The license contains a signature field that ES uses to decide whether the license has been tampered with. If we remove that check, we can modify the license however we like and activate any tier.

I did this against the official ES Docker image, version 7.13.0. In principle the same approach works for any version.

Cracking it

Extract the file
ES_HOME/modules/x-pack-core/x-pack-core-7.13.0.jar
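If you are working from the Docker image, one way to get the jar onto the host is docker cp (the container name elasticsearch is an assumption; adjust to your environment):

# hypothetical container name; copy the jar out of the running container
docker cp elasticsearch:/usr/share/elasticsearch/modules/x-pack-core/x-pack-core-7.13.0.jar .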

Grab a JAR decompiler such as Luyten (any similar tool will do): GitHub

Then open x-pack-core-7.13.0.jar:

Locate the two classes below, then use File > Save As to save each one as a Java source file:

Modify the source

org.elasticsearch.license/LicenseVerifier.class, saved as LicenseVerifier.java

Changes to LicenseVerifier.java

package org.elasticsearch.license;

import java.nio.*;
import org.elasticsearch.common.bytes.*;
import java.security.*;
import java.util.*;
import org.elasticsearch.common.xcontent.*;
import org.apache.lucene.util.*;
import org.elasticsearch.core.internal.io.*;
import java.io.*;

public class LicenseVerifier
{
public static boolean verifyLicense(final License license, final byte[] publicKeyData) {
/* Comment out this entire block
byte[] signedContent = null;
byte[] publicKeyFingerprint = null;
try {
final byte[] signatureBytes = Base64.getDecoder().decode(license.signature());
final ByteBuffer byteBuffer = ByteBuffer.wrap(signatureBytes);
final int version = byteBuffer.getInt();
final int magicLen = byteBuffer.getInt();
final byte[] magic = new byte[magicLen];
byteBuffer.get(magic);
final int hashLen = byteBuffer.getInt();
publicKeyFingerprint = new byte[hashLen];
byteBuffer.get(publicKeyFingerprint);
final int signedContentLen = byteBuffer.getInt();
signedContent = new byte[signedContentLen];
byteBuffer.get(signedContent);
final XContentBuilder contentBuilder = XContentFactory.contentBuilder(XContentType.JSON);
license.toXContent(contentBuilder, (ToXContent.Params)new ToXContent.MapParams((Map)Collections.singletonMap("license_spec_view", "true")));
final Signature rsa = Signature.getInstance("SHA512withRSA");
rsa.initVerify(CryptUtils.readPublicKey(publicKeyData));
final BytesRefIterator iterator = BytesReference.bytes(contentBuilder).iterator();
BytesRef ref;
while ((ref = iterator.next()) != null) {
rsa.update(ref.bytes, ref.offset, ref.length);
}
return rsa.verify(signedContent);
}
catch (IOException ex) {}
catch (NoSuchAlgorithmException ex2) {}
catch (SignatureException ex3) {}
catch (InvalidKeyException e) {
throw new IllegalStateException(e);
}
finally {
if (signedContent != null) {
Arrays.fill(signedContent, (byte)0);
}
}
*/
return true; // add this line
}

public static boolean verifyLicense(final License license) {
/* Comment out this entire block
byte[] publicKeyBytes;
try {
final InputStream is = LicenseVerifier.class.getResourceAsStream("/public.key");
try {
final ByteArrayOutputStream out = new ByteArrayOutputStream();
Streams.copy(is, (OutputStream)out);
publicKeyBytes = out.toByteArray();
if (is != null) {
is.close();
}
}
catch (Throwable t) {
if (is != null) {
try {
is.close();
}
catch (Throwable t2) {
t.addSuppressed(t2);
}
}
throw t;
}
}
catch (IOException ex) {
throw new IllegalStateException(ex);
}
return verifyLicense(license, publicKeyBytes);
*/
return true; // add this line
}
}

org.elasticsearch.xpack.core/XPackBuild.class, saved as XPackBuild.java

Changes to XPackBuild.java

package org.elasticsearch.xpack.core;

import org.elasticsearch.common.io.*;
import java.net.*;
import org.elasticsearch.common.*;
import java.nio.file.*;
import java.io.*;
import java.util.jar.*;

public class XPackBuild
{
public static final XPackBuild CURRENT;
private String shortHash;
private String date;

@SuppressForbidden(reason = "looks up path of xpack.jar directly")
static Path getElasticsearchCodebase() {
final URL url = XPackBuild.class.getProtectionDomain().getCodeSource().getLocation();
try {
return PathUtils.get(url.toURI());
}
catch (URISyntaxException bogus) {
throw new RuntimeException(bogus);
}
}

XPackBuild(final String shortHash, final String date) {
this.shortHash = shortHash;
this.date = date;
}

public String shortHash() {
return this.shortHash;
}

public String date() {
return this.date;
}

static {
final Path path = getElasticsearchCodebase();
String shortHash = null;
String date = null;
Label_0109: {
/* Comment out this entire block
if (path.toString().endsWith(".jar")) {
try {
final JarInputStream jar = new JarInputStream(Files.newInputStream(path, new OpenOption[0]));
try {
final Manifest manifest = jar.getManifest();
shortHash = manifest.getMainAttributes().getValue("Change");
date = manifest.getMainAttributes().getValue("Build-Date");
jar.close();
}
catch (Throwable t) {
try {
jar.close();
}
catch (Throwable t2) {
t.addSuppressed(t2);
}
throw t;
}
break Label_0109;
}
catch (IOException e) {
throw new RuntimeException(e);
}
}
*/
shortHash = "Unknown";
date = "Unknown";
}
CURRENT = new XPackBuild(shortHash, date);
}
}

The Java sources are now modified. Next, compile them into class files and replace the originals.

Generate the class files

Run the following script to compile the two Java files into their class files:
ES_HOME="/usr/share/elasticsearch"
ES_JAR=$(cd $ES_HOME && ls lib/elasticsearch-[0-9]*.jar)
ESCORE_JAR=$(cd $ES_HOME && ls lib/elasticsearch-core-*.jar)
LUCENE_JAR=$(cd $ES_HOME && ls lib/lucene-core-*.jar)
XPACK_JAR=$(cd $ES_HOME && ls modules/x-pack-core/x-pack-core-*.jar)

javac -cp "${ES_HOME}/${ES_JAR}:${ES_HOME}/${LUCENE_JAR}:${ES_HOME}/${XPACK_JAR}:${ES_HOME}/${ESCORE_JAR}" LicenseVerifier.java
javac -cp "${ES_HOME}/${ES_JAR}:${ES_HOME}/${LUCENE_JAR}:${ES_HOME}/${XPACK_JAR}:${ES_HOME}/${ESCORE_JAR}" XPackBuild.java

Unpack, replace and repack

Copy $ES_HOME/modules/x-pack-core/x-pack-core-7.13.0.jar into a temporary directory, /elk/x-pack.

$ export ES_HOME="/usr/share/elasticsearch"
$ cp $ES_HOME/modules/x-pack-core/x-pack-core-7.13.0.jar /elk/x-pack
$ cd /elk/x-pack
# unpack x-pack-core-7.13.0.jar
$ jar -xvf x-pack-core-7.13.0.jar

# replace the .class files
$ cp /root/XPackBuild.class /elk/x-pack/org/elasticsearch/xpack/core/
$ cp /root/LicenseVerifier.class /elk/x-pack/org/elasticsearch/license/

# repack x-pack-core-7.13.0.jar
$ cd /elk/x-pack
$ rm -rf x-pack-core-7.13.0.jar # remove the jar we copied in earlier
$ jar cvf x-pack-core-7.13.0.jar .

Request a license, modify it and import it

Request a basic license from the official Elastic website.

In the issued license, change type to platinum and extend expiry_date_in_millis by a number of years:

{
"license": {
"uid": "92c6b41e-59f9-4674-b227-77063c5fa8b0",
"type": "platinum",
"issue_date_in_millis": 1642291200000,
"expiry_date_in_millis": 3107746200000,
"max_nodes": 100,
"issued_to": "junyao hong (race)",
"issuer": "Web Form",
"signature": "AAAAAwAAAA0kxge9SLSAvWWnMgDEAAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSmkxaktJRVl5MUYvUWh3bHZVUTllbXNPbzBUemtnbWpBbmlWRmRZb25KNFlBR2x0TXc2K2p1Y1VtMG1UQU9TRGZVSGRwaEJGUjE3bXd3LzRqZ05iLzRteWFNekdxRGpIYlFwYkJiNUs0U1hTVlJKNVlXekMrSlVUdFIvV0FNeWdOYnlESDc3MWhlY3hSQmdKSjJ2ZTcvYlBFOHhPQlV3ZHdDQ0tHcG5uOElCaDJ4K1hob29xSG85N0kvTWV3THhlQk9NL01VMFRjNDZpZEVXeUtUMXIyMlIveFpJUkk2WUdveEZaME9XWitGUi9WNTZVQW1FMG1DenhZU0ZmeXlZakVEMjZFT2NvOWxpZGlqVmlHNC8rWVVUYzMwRGVySHpIdURzKzFiRDl4TmM1TUp2VTBOUlJZUlAyV0ZVL2kvVk10L0NsbXNFYVZwT3NSU082dFNNa2prQ0ZsclZ4NTltbU1CVE5lR09Bck93V2J1Y3c9PQAAAQBAD9GxJeiZQonVdEVrn5+frA3tMD18Stcp3fiHVGVdXRzbHQd3N23tTXSyXlqQo0lB/dDt1A4iKh8/Wotp38mFkYq/W/HbJC3hYkJaOQwBPO0aelWYTi4hAxw7c8HSjLf2S4J0dK7LYRW9vfuaK/YrCr42fOGsZ3GX+9WcwbBWT6ONnaJa2dMQRnDsrmcE599LiEz++8GvICWhzfGxjcHk4lsEGmFBC1FajDQsGf/d7oCI3EiNodgSMHtP3u6DZCt8h036wn4gyv5XdH3YauUltsKDmYqGFfD/Udy4kmiKR5qExX4i/K+7q+p4TVJ3GHqgVwtdXGkKiq32qXEqktj6",
"start_date_in_millis": 1642291200000
}
}

The license.json file is now ready.

Before importing the license, make sure that:
xpack.security.enabled: false
xpack.security.transport.ssl.enabled: false

Re-enable these settings after the upgrade to Platinum is complete.

Then load the license into ES:
$ curl -XPUT -u elastic 'http://localhost:9200/_xpack/license' -H "Content-Type: application/json" -d @license.json
Enter host password for user 'elastic': # enter the elastic user's password
{"acknowledged":true,"license_status":"valid"} # the license was written successfully

Check the license:

$ curl -XGET -uelastic http://localhost:9200/_license
Enter host password for user 'elastic':
{
"license" : {
"status" : "active",
"uid" : "92c6b41e-59f9-4674-b227-77063c5fa8b0",
"type" : "platinum",
"issue_date" : "2019-11-29T00:00:00.000Z",
"issue_date_in_millis" : 1558051200000,
"expiry_date" : "2068-06-24T14:50:00.999Z",
"expiry_date_in_millis" : 2524579200999,
"max_nodes" : 1000,
"issued_to" : "pyker",
"issuer" : "Web Form",
"start_date_in_millis" : 1558051200000
}
}

Finally, make sure you restart both elasticsearch and kibana. Restarting only elasticsearch and not kibana will make Kibana report an invalid license.
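For a Docker-based setup this can be as simple as the following (assuming the containers are named elasticsearch and kibana; adjust to your environment):

docker restart elasticsearch kibana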

Viewing the license in Kibana

Series index

ElasticStack: installation
ElasticStack: elasticsearch
ElasticStack: logstash
elasticSearch: mapping
elasticSearch: analyzer overview
elasticSearch: analyzer practice notes
elasticSearch: custom synonym analyzer practice
docker-elk cluster practice
filebeat and logstash practice
filebeat pipeline practice
Elasticsearch 7.x Platinum crack practice
ELK alerting research and practice

Background

Last time, in "filebeat pipeline practice", we shipped error logs to ES with filebeat, so they are now collected in real time. But can we be notified when a particular kind of error shows up, for example through a WeCom (Enterprise WeChat) bot, SMS or email? Let's look into ELK's alerting options.

kibana Alerting

This is a paid feature; Kibana Alerting is now integrated into Kibana.
For unlocking it, see "Elasticsearch 7.x Platinum crack practice".

  • Alerts and Actions (rules and connectors)
    Alerts is a service that runs in Kibana. It hides the complicated conditions and its functionality is fairly simple; Watcher offers more complex lookups and lets you build more complex conditions with the query DSL.
  • Watcher
    Watcher runs in Elasticsearch.

Alerts and Actions (rules and connectors)

Since it only supports adding simple rules through the UI, I won't go deeper into it here.

Watcher

A watch is made up of five parts:
{
"trigger": {},
"input": {},
"condition": {},
"transform" {},
"actions": {}
}

trigger

This defines how often the watch runs. For example:
"trigger": {
"schedule": {
"daily": {
"at": [
"9:45" //  其实是东八区 17:45
]
}
}
}

Note: if you schedule with cron or a specific time of day, it must be given in UTC, i.e. the local time minus 8 hours, because the trigger currently only supports UTC.
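For example, a daily cron schedule that should fire at 09:45 Beijing time has to be written as 01:45 UTC; a small sketch (Watcher cron expressions start with a seconds field):

"trigger": {
  "schedule": {
    "cron": "0 45 1 * * ?"
  }
}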

Related links:
https://www.elastic.co/guide/en/elasticsearch/reference/7.16/trigger-schedule.html
https://github.com/elastic/elasticsearch/issues/34659

input

input fetches the data you want to evaluate, for example a periodic search over today's log data:

"input": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"<vpn-log-{now/d{YYYY-MM-dd}}>"
],
"rest_total_hits_as_int": true,
"body": {
"size": 0,
"query": {
"bool": {
"filter": {
"range": {
"@timestamp": {
"gte": "now/d",
"lte": "now",
"time_zone": "+08:00"
}
}
}
}
}
}
}
}
}

condition

condition evaluates whether the data loaded into the watch meets the trigger requirement, for example total hits greater than 0:
"condition": {
"compare": {
"ctx.payload.hits.total": {
"gt": 0
}
}
},

transform

transform loads data into ctx.payload. It does not have to match the input, which lets the action access exactly the content we want to send in the notification:

"transform": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"<vpn-log-{now/d{YYYY-MM-dd}}>"
],
"rest_total_hits_as_int": true,
"body": {
"query": {
"bool": {
"filter": {
"range": {
"@timestamp": {
"gte": "now/d",
"lte": "now",
"time_zone": "+08:00"
}
}
}
}
},
"aggs": {
"topn": {
"terms": {
"field": "tags"
},
"aggs": {
"source_ip_topn": {
"terms": {
"field": "source_ip"
}
}
}
}
}
}
}
}
}

actions

The real power of Watcher is being able to do something when the watch condition is met. A watch's actions define what happens when the condition evaluates to true: send an email, call a third-party webhook, write documents to an Elasticsearch index, or log a message to the standard Elasticsearch log file. Here we call a WeCom (Enterprise WeChat) bot webhook:

"actions": {
"wecom_webhook": {
"webhook": {
"scheme": "https",
"host": "qyapi.weixin.qq.com",
"port": 443,
"method": "post",
"path": "/cgi-bin/webhook/send",
"params": {
"key": "XXX"
},
"headers": {
"Content-Type": "application/json"
},
"body": """{"msgtype":"text","text":{"content":"【vpn监控-每日异常汇总】 - 今日当前共{{ctx.payload.hits.total}}条错误异常\n\n 问题排行:\n\n{{#ctx.payload.aggregations.topn.buckets}} - {{key}} {{doc_count}}次\n{{#source_ip_topn.buckets}} \t {{key}} {{doc_count}}次\n{{/source_ip_topn.buckets}}\n{{/ctx.payload.aggregations.topn.buckets}}\n\n请查看Dashboard定位问题:http://it.dakewe.com/goto/fc2c30d43913c3bc066fd5b470b47953\n账号/密码:public_viewer"}}"""
}
}
}

Complete example

{
"trigger": {
"schedule": {
"daily": {
"at": [
"9:45"
]
}
}
},
"input": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"<vpn-log-{now/d{YYYY-MM-dd}}>"
],
"rest_total_hits_as_int": true,
"body": {
"size": 0,
"query": {
"bool": {
"filter": {
"range": {
"@timestamp": {
"gte": "now/d",
"lte": "now",
"time_zone": "+08:00"
}
}
}
}
}
}
}
}
},
"condition": {
"compare": {
"ctx.payload.hits.total": {
"gt": 0
}
}
},
"actions": {
"wecom_webhook": {
"webhook": {
"scheme": "https",
"host": "qyapi.weixin.qq.com",
"port": 443,
"method": "post",
"path": "/cgi-bin/webhook/send",
"params": {
"key": "XXX"
},
"headers": {
"Content-Type": "application/json"
},
"body": """{"msgtype":"text","text":{"content":"【vpn监控-每日异常汇总】 - 今日当前共{{ctx.payload.hits.total}}条错误异常\n\n 问题排行:\n\n{{#ctx.payload.aggregations.topn.buckets}} - {{key}} {{doc_count}}次\n{{#source_ip_topn.buckets}} \t {{key}} {{doc_count}}次\n{{/source_ip_topn.buckets}}\n{{/ctx.payload.aggregations.topn.buckets}}\n\n请查看Dashboard定位问题:http://it.dakewe.com/goto/fc2c30d43913c3bc066fd5b470b47953\n账号/密码:public_viewer"}}"""
}
}
},
"transform": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"<vpn-log-{now/d{YYYY-MM-dd}}>"
],
"rest_total_hits_as_int": true,
"body": {
"query": {
"bool": {
"filter": {
"range": {
"@timestamp": {
"gte": "now/d",
"lte": "now",
"time_zone": "+08:00"
}
}
}
}
},
"aggs": {
"topn": {
"terms": {
"field": "tags"
},
"aggs": {
"source_ip_topn": {
"terms": {
"field": "source_ip"
}
}
}
}
}
}
}
}
}
}

Creating and simulating a watch

Watches can be created and simulated from Kibana.
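They can also be managed through the API from the Dev Tools console; a sketch, where my_watch is a hypothetical watch ID and the body is the complete example above:

// store the watch
PUT _watcher/watch/my_watch
{ ...the watch definition shown in the complete example... }

// run it once immediately, without waiting for the trigger
POST _watcher/watch/my_watch/_execute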


Newer GitLab releases use puma instead of unicorn by default. If your configuration file still contains the old unicorn settings, puma and unicorn will conflict. The fix is to replace the unicorn settings in gitlab.rb with the corresponding puma settings.

Old version:

unicorn['enable'] = true
unicorn['worker_timeout'] = 60
unicorn['worker_processes'] = 2

unicorn['worker_memory_limit_min'] = "300 * 1 << 20"
unicorn['worker_memory_limit_max'] = "500 * 1 << 20"

sidekiq['concurrency'] = 16

postgresql['shared_buffers'] = "256MB"
postgresql['max_worker_processes'] = 8

New version:
puma['enable'] = true
puma['worker_timeout'] = 60
puma['worker_processes'] = 2
puma['max_threads'] = 4
puma['per_worker_max_memory_mb'] = 1024
sidekiq['max_concurrency'] = 16
postgresql['shared_buffers'] = "256MB"
postgresql['max_worker_processes'] = 8

Mapping reference

jira

JIRA is a defect tracking and management system: commercial software for defect management, task tracking and project management, developed by Atlassian (Australia). The name JIRA is not an acronym; it is taken from "Gojira", the Japanese pronunciation of Godzilla. Official site
FROM cptactionhank/atlassian-jira-software:8.1.0

USER root

# Add the crack agent jar to the container
COPY "atlassian-agent.jar" /opt/atlassian/jira/

# Load the agent at startup
RUN echo 'export CATALINA_OPTS="-javaagent:/opt/atlassian/jira/atlassian-agent.jar ${CATALINA_OPTS}"' >> /opt/atlassian/jira/bin/setenv.sh

Directory layout
- JIRA
--Dockerfile
--atlassian-agent.jar

Build and run with Docker
docker build -t dakewe/jira:v8.1.0 .

docker run --name jira --detach --publish 8080:8080 dakewe/jira:v8.1.0

Running the application

Crack
java -jar atlassian-agent.jar -d -m 82607314@qq.com -n dakewe -p jira -o http://localhost:8090 -s B5WY-IHN3-GITJ-FAA7

// or
docker exec confluence java -jar /opt/atlassian/confluence/atlassian-agent.jar \
-p jira \
-m 82607314@qq.com \
-n 82607314@qq.com \
-o http://localhost:8090 \
-s BNZF-ITXS-8W2V-IWQP

confluence

Atlassian Confluence is a professional wiki: a knowledge management tool that enables collaboration and knowledge sharing within a team. Official site
FROM cptactionhank/atlassian-confluence:7.9.3

USER root

# Add the crack agent jar to the container
COPY "atlassian-agent.jar" /opt/atlassian/confluence/

# Load the agent at startup
RUN echo 'export CATALINA_OPTS="-javaagent:/opt/atlassian/confluence/atlassian-agent.jar ${CATALINA_OPTS}"' >> /opt/atlassian/confluence/bin/setenv.sh

Directory layout
- Confluence
--Dockerfile
--atlassian-agent.jar

Build and run
docker build -f Dockerfile -t dakewe/confluence:7.9.3 .

docker run --name confluence --detach --publish 8081:8090 dakewe/confluence:7.9.3

Crack
java -jar atlassian-agent.jar -d -m 82607314@qq.com -n DAKEWE -p conf -o http://103.39.231.195 -s BJRF-N4SL-YE98-QPJA

// or
docker exec confluence java -jar /opt/atlassian/confluence/atlassian-agent.jar \
-p conf \
-m 82607314@qq.com \
-n 82607314@qq.com \
-o http://localhost:8090 \
-s BNZF-ITXS-8W2V-IWQP

MySQL example

-- Create the jira database and user
create database jiradb character set 'UTF8';
create user jirauser identified by 'jira';
grant all privileges on *.* to 'jirauser'@'%' identified by 'jira' with grant option;
grant all privileges on *.* to 'jirauser'@'localhost' identified by 'jira' with grant option;
flush privileges;


-- Create the confluence database and user
create database confdb character set 'UTF8';
create user confuser identified by 'conf';
grant all privileges on *.* to 'confuser'@'%' identified by 'conf' with grant option;
grant all privileges on *.* to 'confuser'@'localhost' identified by 'conf' with grant option;
flush privileges;

-- Set the transaction isolation level for confdb
show variables like 'tx%';
set session transaction isolation level read committed;
show variables like 'tx%';

Related downloads

Crack files

Background

Last time, in "filebeat and logstash practice", we collected data with filebeat, shipped it to logstash, and used logstash to filter and process it before indexing. In the spirit of cutting out a hop wherever possible, this time we drop logstash and use an ingest pipeline to do the processing, with filebeat writing directly to ES.

We use the same sample log data as last time; the overall flow is:

  • Add the pipeline to ES through Kibana's Dev Tools (you can also use the ES API directly).
  • Reference the pipeline by name in the filebeat configuration.
  • When filebeat ships the data straight to ES, ES runs the pipeline before indexing.

Practice

Filebeat: define the elasticsearch output and the pipeline

filebeat.inputs:
  - type: filestream
    enabled: true
    paths:
      - /usr/share/filebeat/logfiles/*.log
    include_lines: ['A large volume of broadcast packets has been detected']
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s
setup.ilm.enabled: false
setup.template.name: "vpn-log"
setup.template.pattern: "vpn-log-*"
output.elasticsearch:
  index: "vpn-log-%{+yyyy-MM-dd}"
  hosts: ["10.8.99.34:9200"]
  username: "elastic"
  password: "XXX"
  pipeline: "vpn_log_pipeline"
processors:
  - drop_fields:
      fields:
        - agent.ephemeral_id
        - agent.hostname
        - agent.id
        - agent.type
        - agent.version
        - ecs.version
        - input.type
        - log.offset
        - version

Create the pipeline from Kibana Dev Tools

Test it first:

// simulate the pipeline
POST _ingest/pipeline/_simulate
{
"pipeline": {
"description" : "vpn_log_pipeline",
"processors" : [
{
"grok" : {
"field" : "message",
"patterns" : [
"""%{TIMESTAMP_ISO8601:error_time} \[HUB "%{NOTSPACE:hub}"\] Session "%{NOTSPACE:session}": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is %{NOTSPACE:mac_address}, the source IP address is %{IPV4:source_ip}, the destination IP address is %{IPV4:destination_ip}. The number of broadcast packets is equal to or larger than %{NUMBER:items_per_second} items per 1 second """
],
"ignore_failure" : true
},
"convert" : {
"field" : "items_per_second",
"type" : "integer",
"ignore_failure" : true
}
},
{
"date" : {
"field" : "error_time",
"target_field" : "@timestamp",
"formats" : [
"yyyy-MM-dd HH:mm:ss.SSS"
],
"timezone" : "Asia/Shanghai"
}
}
]
},
"docs": [
{
"_source": {
"message": """2022-01-17 14:19:07.047 [HUB "hub_dkwbj"] Session "SID-BRIDGE-20": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 70-B5-E8-2F-C9-5C, the source IP address is 192.168.9.134, the destination IP address is 0.0.0.0. The number of broadcast packets is equal to or larger than 34 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect)."""
}
}
]
}
// create the pipeline
PUT _ingest/pipeline/vpn_log_pipeline
{
"description": "vpn_log_pipeline",
"processors": [
{
"grok": {
"field": "message",
"patterns": [
"%{TIMESTAMP_ISO8601:error_time} \\[HUB \"%{NOTSPACE:hub}\"\\] Session \"%{NOTSPACE:session}\": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is %{NOTSPACE:mac_address}, the source IP address is %{IP:source_ip}, the destination IP address is %{IP:destination_ip}. The number of broadcast packets is equal to or larger than %{NUMBER:items_per_second} items per 1 second "
],
"ignore_failure": true
},
"convert": {
"field": "items_per_second",
"type": "integer",
"ignore_failure": true
}
},
{
"date": {
"field": "error_time",
"target_field": "@timestamp",
"formats": [
"yyyy-MM-dd HH:mm:ss.SSS"
],
"timezone": "Asia/Shanghai"
}
}
]
}

// other operations
GET _ingest/pipeline/vpn_log_pipeline
DELETE _ingest/pipeline/vpn_log_pipeline
// simulate the stored pipeline
GET _ingest/pipeline/vpn_log_pipeline/_simulate
{
"docs": [
{
"_source": {
"message": """2022-01-17 14:19:07.047 [HUB "hub_dkwbj"] Session "SID-BRIDGE-20": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 70-B5-E8-2F-C9-5C, the source IP address is 192.168.9.134, the destination IP address is 0.0.0.0. The number of broadcast packets is equal to or larger than 34 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect)."""
}
}
]
}

Start filebeat and the log files are shipped to ES and processed by the pipeline before indexing. The pipeline uses the grok processor, much like logstash does.


Background

During development we run into mutual-exclusion problems: to keep data consistent and avoid duplicate operations we need locks. Put simply, locks exist to solve the problems introduced by concurrency.

For a typical failure case, see "What a production incident tells us about the strength of MongoDB's transaction ACID guarantees".

An example: inventory. Under concurrency (simulated with ab, 10 requests at a time), suppose placing an order consumes stock: we first query the current quantity and then deduct from it. With 10 concurrent requests we can easily read a wrong stock value. In transaction terms this is a dirty or phantom read.

Database concurrency control comes down to two approaches:

  • Pessimistic locking: assume conflicts will happen, and block every operation that could violate data integrity.

Pessimistic locking assumes it is very likely that other users will try to access or change the object you are reading or updating, so it locks the object before you start changing it and releases the lock only after you commit. The drawback is that a page or row lock may be held for a long time, blocking other users, so concurrency suffers.

  • Optimistic locking: assume conflicts won't happen, and only check for integrity violations at commit time. Optimistic locking does not solve dirty or repeated reads.

Optimistic locking assumes it is unlikely that other users will change the object you are updating, so it does not lock the object while you read and modify it, and only locks it when you are about to commit. Locks are held for a much shorter time, so a coarser lock granularity can still give good concurrency. But if a second user happened to read the object before the first user committed, the database will detect at the second user's commit that the object has changed, and that user has to re-read the object and redo the change. So under optimistic locking, concurrent users read objects more often.

The lock we usually reach for is a form of pessimistic locking.

Optimistic locking

todo
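A minimal sketch of the classic version-field approach described above, assuming a hypothetical mongoose Inventory model with stock and version fields (all names are illustrative):

// read the current state and remember the version we saw
const item = await ctx.model.Inventory.findById(id)

// deduct stock only if nobody has changed the document since we read it
const updated = await ctx.model.Inventory.findOneAndUpdate(
  { _id: id, version: item.version, stock: { $gte: qty } },
  { $inc: { stock: -qty, version: 1 } },
  { new: true }
)

if (!updated) {
  // someone else updated the document (or stock ran out): re-read and retry, or fail the request
  throw new Error('concurrent update detected, please retry')
}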

Pessimistic locking

Redis-based distributed locks

For distributed locks on Redis, the Redis project describes an algorithm called Redlock, and implementations exist for many languages, e.g. Redlock-rb for Ruby, Redlock-py for Python and Redisson for Java. So let's dig into how Redis distributed locks are used and then analyze node-redlock.
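Before looking at node-redlock itself, here is a minimal sketch of the single-instance primitive that Redlock builds on: SET NX PX to acquire a lock, and a compare-and-delete Lua script to release it. It assumes ioredis and Node 14.17+ (for crypto.randomUUID); all names are illustrative:

const Redis = require('ioredis')
const { randomUUID } = require('crypto')

const redis = new Redis()

async function withLock(key, ttlMs, fn) {
  const token = randomUUID()
  // NX: only set if the key does not exist; PX: auto-expire so a crashed holder cannot block forever
  const ok = await redis.set(key, token, 'PX', ttlMs, 'NX')
  if (ok !== 'OK') throw new Error('lock is held by someone else')
  try {
    return await fn()
  } finally {
    // only delete the lock if we still own it (the token matches)
    await redis.eval(
      "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end",
      1, key, token
    )
  }
}

// usage: await withLock('lock:stock:sku-1', 5000, async () => { /* deduct stock here */ })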

Background

We have used ELK before and went through logstash in detail as the CDC transport from MySQL to ES; see "ElasticStack: logstash".
This time we collect the log files with filebeat, filter and process them with logstash, extract the data we need, and write it into mongodb.

Only lines containing "A large volume of broadcast packets has been detected" are collected, and the required fields are extracted before storing.

Sample data:

2021-12-01 00:00:07.115 [HUB "hub_dkwbj"] Session "SID-BRIDGE-5": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 50-9A-4C-27-F9-D3, the source IP address is fe80::e8d3:8281:e69e:afda, the destination IP address is ff02::1:3. The number of broadcast packets is equal to or larger than 32 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect).
2021-12-01 00:00:07.115 [HUB "hub_dkwbj"] Session "SID-BRIDGE-5": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 50-9A-4C-27-F9-D3, the source IP address is 192.168.9.103, the destination IP address is 224.0.0.252. The number of broadcast packets is equal to or larger than 32 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect).
2021-12-01 00:01:34.923 [HUB "hub_dkwbj"] Session "SID-BRIDGE-5": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 50-9A-4C-27-F9-D3, the source IP address is 192.168.9.103, the destination IP address is 224.0.0.251. The number of broadcast packets is equal to or larger than 40 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect).
2021-12-01 00:01:34.923 [HUB "hub_dkwbj"] Session "SID-BRIDGE-5": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 50-9A-4C-27-F9-D3, the source IP address is fe80::e8d3:8281:e69e:afda, the destination IP address is ff02::fb. The number of broadcast packets is equal to or larger than 40 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect).
2021-12-01 00:03:48.133 [HUB "hub_dkwbj"] Session "SID-BRIDGE-5": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 48-4D-7E-BE-B0-87, the source IP address is 192.168.9.21, the destination IP address is 224.0.0.251. The number of broadcast packets is equal to or larger than 52 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect).
2021-12-01 00:03:48.133 [HUB "hub_dkwbj"] Session "SID-BRIDGE-5": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 48-4D-7E-BE-B0-87, the source IP address is fe80::c129:65df:e7de:f745, the destination IP address is ff02::fb. The number of broadcast packets is equal to or larger than 52 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect).
2021-12-01 00:03:48.133 [HUB "hub_dkwbj"] Session "SID-BRIDGE-5": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 50-9A-4C-27-F9-D3, the source IP address is 192.168.9.103, the destination IP address is 224.0.0.251. The number of broadcast packets is equal to or larger than 60 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect).
2021-12-01 00:03:48.133 [HUB "hub_dkwbj"] Session "SID-BRIDGE-5": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is 50-9A-4C-27-F9-D3, the source IP address is fe80::e8d3:8281:e69e:afda, the destination IP address is ff02::fb. The number of broadcast packets is equal to or larger than 60 items per 1 second (note this information is the result of mechanical analysis of part of the packets and could be incorrect).
2021-12-01 00:11:07.141 On the TCP Listener (Port 5555), a Client (IP address 167.248.133.58, Host name "scanner-09.ch1.censys-scanner.com", Port number 40418) has connected.
2021-12-01 00:11:07.141 For the client (IP address: 167.248.133.58, host name: "scanner-09.ch1.censys-scanner.com", port number: 40418), connection "CID-8671" has been created.
2021-12-01 00:11:08.058 Connection "CID-8671" has been terminated.
2021-12-01 00:11:08.058 The connection with the client (IP address 167.248.133.58, Port number 40418) has been disconnected.
2021-12-01 00:11:08.289 On the TCP Listener (Port 5555), a Client (IP address 167.248.133.58, Host name "scanner-09.ch1.censys-scanner.com", Port number 34038) has connected.
2021-12-01 00:11:08.289 For the client (IP address: 167.248.133.58, host name: "scanner-09.ch1.censys-scanner.com", port number: 34038), connection "CID-8672" has been created.
2021-12-01 00:11:08.531 SSL communication for connection "CID-8672" has been started. The encryption algorithm name is "AES128-SHA".
2021-12-01 00:11:10.011 Connection "CID-8672" terminated by the cause "A client which is non-SoftEther VPN software has connected to the port." (code 5).

Create a Docker network
docker network create --driver bridge leiqin

Build a logstash image that includes logstash-output-mongodb

The dockerfile for an image with logstash-output-mongodb installed (logstash.dockerfile):
FROM docker.elastic.co/logstash/logstash:7.13.0
RUN logstash-plugin install --version=3.1.5 logstash-output-mongodb

Build the custom logstash image
docker build -f logstash.dockerfile -t dakewe/logstash:1.0 .

docker-compose definition

version: '3.0'
services:

  filebeat:
    image: docker.elastic.co/beats/filebeat:7.13.0
    container_name: filebeat
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./filebeat/data:/usr/share/filebeat/data
      - ./filebeat/logs:/usr/share/filebeat/logs
      - ./filebeat/logfiles:/usr/share/filebeat/logfiles
    environment:
      LS_JAVA_OPTS: "-Xmx1024m -Xms1024m"
    networks:
      - leiqin

  logstash:
    image: dakewe/logstash:1.0
    container_name: logstash
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5044:5044"
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx1024m -Xms1024m"
    networks:
      - leiqin

networks:
  leiqin:
    external: true

Installing logstash-output-mongodb (when using the official image)

Tip: if you run the official image, exec into the container after it starts and install logstash-output-mongodb there.

Do not install the newer 3.1.6 release; pin version 3.1.5. The details of this pitfall are in the author's reply on GitHub.
bin/logstash-plugin install --version=3.1.5 logstash-output-mongodb

Filebeat definition


filebeat.inputs:
  - type: filestream
    enabled: true
    paths:
      - /usr/share/filebeat/logfiles/*.log
    include_lines: ['A large volume of broadcast packets has been detected']

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.logstash:
  # The Logstash hosts
  hosts: ["logstash:5044"]

Logstash: print the output to the terminal

First let logstash simply print whatever filebeat sends:

input {
beats {
port => 5044
}
}

output {
stdout {
codec => rubydebug
}
}

Logstash: output to mongodb

Filter in logstash and write the result into mongodb:

input {
beats {
port => 5044
}
}

filter {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:time} \[HUB \"%{NOTSPACE:hub}\"\] Session \"%{NOTSPACE:session}\": A large volume of broadcast packets has been detected. There are cases where packets are discarded based on the policy. The source MAC address is %{NOTSPACE:mac_address}, the source IP address is %{IP:source_ip}, the destination IP address is %{IP:destination_ip}. The number of broadcast packets is equal to or larger than %{NUMBER:items_per_second} items per 1 second "}
}

grok {
match => { "[log][file][path]" => ".*(\\|\/).*(\\|\/)(?<file_name>.*).*"}
}

date {
match => [ "time","ISO8601"]
timezone => "Asia/Chongqing"
target => "created_at"
}

mutate{
remove_field => ["host"]
remove_field => ["agent"]
remove_field => ["input"]
remove_field => ["tags"]
remove_field => ["ecs"]
remove_field => ["time"]
remove_field => ["log"]
}
}

output {
stdout {
codec => rubydebug
}

mongodb {
collection => "vpn_log"
generateId => "true"
database => "log"
uri => "mongodb://localhost:27017"
}
}

Summary

Fairly straightforward; combined with ELK the result is even better.


Background

Our CRM project uses mongodb. While going through all of the legacy code I found a lot of multi-collection operations with atomicity problems, so this is a good opportunity to look at the replica-set transactions MongoDB has supported since 4.0.

Reproducing the problem

Multi-collection operations

An atomicity problem that needs rollback on failure:
// create a serial number (auto-increment in the serial-number collection)

throw Error -> execution aborts here

// create the order

Problem: when the exception aborts execution, the serial number has already been incremented, but creating the order failed. The next operation therefore ends up with a gap in the order serial numbers.

There is a second, hidden problem here: without a lock, high concurrency can also corrupt the serial number.

This is what transaction atomicity is about: when execution aborts, the transaction should roll back and undo the serial-number increment.
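A minimal sketch of the transactional version, using the getSession helper shown later in this post (SerialNumber and Order are hypothetical models):

const session = await ctx.getSession()
try {
  // increment the serial number and create the order inside one transaction
  const serial = await ctx.model.SerialNumber.findOneAndUpdate(
    { name: 'order' },
    { $inc: { seq: 1 } },
    { new: true, session }
  )
  await ctx.model.Order.create([{ serialNo: serial.seq }], { session })
  await session.commitTransaction()
} catch (error) {
  // any failure rolls back the serial-number increment as well
  await session.abortTransaction()
  throw error
} finally {
  session.endSession()
}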

Reproducing the problem, part 2

A Redis failure

// save the order
await ctx.model.Delivery.VirtualOrder.create(orders)

// send the notification -> redis throws an error here
const jobs = await ctx.app.noticeQueue.addBulk(datas)

Problem: when Redis fails and the notification cannot be sent, the whole operation should be rolled back.

The correct approach: start a transaction and roll back on error, as follows.

app/extend/context.js

module.exports = {
async getSession(
opt = {
readPreference: { mode: 'primary' },
}
) {
const { mongoose } = this.app
const session = await mongoose.startSession(opt)
await session.startTransaction({
readConcern: { level: 'majority' },
writeConcern: { w: 'majority' },
})
return session
},
}

const { ctx } = this
const session = await this.ctx.getSession()
try {
// save the order
await ctx.model.Delivery.VirtualOrder.create(orders, { session })
// send the notification -> redis throws an error here
const jobs = await ctx.app.noticeQueue.addBulk(datas)
} catch (error) {
console.log('starting rollback')
await session.abortTransaction()
session.endSession()
console.log(error)
}

A wrong (or at least not recommended) approach: manual deletion.

try {
// create
// create
// create
// create
// create
// create
} catch {
// manually delete what was created
}

Setting up a MongoDB replica set

Bash script
#!/bin/bash
# generate the keyfile
mkdir ./keyfile
openssl rand -base64 745 > ./keyfile/mongoReplSet-keyfile
chmod 600 ./keyfile/mongoReplSet-keyfile

If you hit an error like this:
configsvr01    | {"t":{"$date":"2021-05-29T17:38:02.750+00:00"},"s":"I",  "c":"ACCESS",   "id":20254,   "ctx":"main","msg":"Read security file failed","attr":{"error":{"code":30,"codeName":"InvalidPath","errmsg":"error opening file: /data/mongo.key: bad file"}}}

Fix: change the owner of mongoReplSet-keyfile:
chown 999 mongoReplSet-keyfile

Start it with Docker

docker-compose -f docker-compose.yml up -d

The docker-compose file:

version: '3.1'
services:
  mongo1:
    image: mongo
    hostname: mongo1
    container_name: mongo1
    restart: always
    ports:
      - 27011:27017
    volumes:
      - ./keyfile:/data/keyfile
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123123
    command: mongod --auth --keyFile /data/keyfile/mongoReplSet-keyfile --bind_ip_all --replSet rs0

  mongo2:
    image: mongo
    hostname: mongo2
    container_name: mongo2
    restart: always
    ports:
      - 27012:27017
    volumes:
      - ./keyfile:/data/keyfile
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123123
    command: mongod --auth --keyFile /data/keyfile/mongoReplSet-keyfile --bind_ip_all --replSet rs0

  mongo3:
    image: mongo
    hostname: mongo3
    container_name: mongo3
    restart: always
    ports:
      - 27013:27017
    volumes:
      - ./keyfile:/data/keyfile
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123123
    command: mongod --auth --keyFile /data/keyfile/mongoReplSet-keyfile --bind_ip_all --replSet rs0
Container   IP                 Role
mongo1      10.8.99.44:27011   Primary (read/write)
mongo2      10.8.99.44:27012   Secondary 1 (read)
mongo3      10.8.99.44:27013   Secondary 2 (read)

Configure the replica set

docker exec -it <container> mongo
mongo -u root -p 123123
use admin
rs.initiate(
{
_id : 'rs0',
members: [
{ _id : 0, host : "10.8.99.44:27011", priority:3 },
{ _id : 1, host : "10.8.99.44:27012", priority:3 },
{ _id : 2, host : "10.8.99.44:27013", priority:3 }
]
}
)

Useful replica set commands

Reconfigure
rs.reconfig(
{
_id : 'rs0',
members: [
{ _id : 0, host : "10.8.99.44:27011", priority:3 },
{ _id : 1, host : "10.8.99.44:27012", priority:1 },
{ _id : 2, host : "10.8.99.44:27013", priority:1 }
]
}
)
rs.config()

Force-change the replica set hosts

rs.reconfig(
{
_id : 'rs0',
members: [
{ _id : 0, host : "192.168.199.98:27011", priority:3 },
{ _id : 1, host : "192.168.199.98:27012", priority:1 },
{ _id : 2, host : "192.168.199.98:27013", priority:1 }
]
},
{
"force":true
}
)

Change member priority

This must be executed on the primary. A member's priority (0–100) determines its election weight: the larger the value, the higher the priority. A priority of 0 means the member can never become primary, which is useful for a cold standby.
PRIMARY> config=rs.conf()
PRIMARY>config.members[0].priority = 6
PRIMARY> rs.reconfig(config)
// check replica set status
rs.status()
rs.isMaster()
// check replica set configuration
rs.conf()
// check secondary replication status
rs.printReplicationInfo()
rs.printSecondaryReplicationInfo()

Transactions in practice

Connecting to the replica set with mongoose

config.mongoose = {
url: 'mongodb://10.8.99.44:27011,10.8.99.44:27012,10.8.99.44:27013/log',
options: {
auth: {
user: 'admin',
password: '123123',
},
replicaSet: 'rs0', // replica set name
readPreference: 'secondaryPreferred', // which members reads go to
readConcern: { level: 'majority' }, // read policy: avoids dirty reads; similar to an isolation level in an RDBMS; only data committed on a majority of nodes is readable
// write concern: with 'majority' (2 of the 3 nodes here), a write is acknowledged once the primary knows two nodes have applied it
writeConcern: {
w: 'majority',
// j controls whether a write must be journaled before it is acknowledged:
// {j: true} requires the primary to journal the write before acknowledging
// {j: false} acknowledges once the write is in the journal buffer; it is flushed to disk later (every 100ms by default)
j: true,
wtimeout: 1000,
},
keepAlive: true,
keepAliveInitialDelay: 300000,
useNewUrlParser: true,
useFindAndModify: false,
useCreateIndex: true,
useUnifiedTopology: true,
serverSelectionTimeoutMS: 30000,
socketTimeoutMS: 45000,
},
}

readPreference

By default both reads and writes go to the primary of the replica set. For read-heavy, write-light workloads we can use read/write splitting to take load off the primary. MongoDB drivers support five read preference modes.

Read Preference      Description
primary              Default mode. All operations read from the current replica set primary.
primaryPreferred     Reads from the primary in most cases, falling back to a secondary if it is unavailable.
secondary            All operations read from secondaries.
secondaryPreferred   Reads from a secondary in most cases, falling back to the primary if no secondary is available.
nearest              Reads from the replica set member with the lowest network latency, regardless of member type.

Transaction rollback with mongoose

app/extend/context.js

module.exports = {
async getSession(
opt = {
readPreference: { mode: 'primary' },
}
) {
const { mongoose } = this.app
const session = await mongoose.startSession(opt)
await session.startTransaction({
readConcern: { level: 'majority' },
writeConcern: { w: 'majority' },
})
return session
},
}

app/service/test.js

async transaction() {
const { ctx } = this
const session = await this.ctx.getSession()
const db = this.app.mongoose.connection
try {
const res1 = await ctx.model.VpnLog.create(
[
{
mac_address: 'test',
session: 'test',
source_ip: 'test',
destination_ip: 'test',
hub: 'test',
items_per_second: 111,
created_at: new Date(),
},
],
{ session }
)
throw new Error('error') // deliberately fail here, e.g. an error from some other operation
const res2 = await ctx.model.VpnLog.create(
[
{
mac_address: 'test2',
session: 'test2',
source_ip: 'test2',
destination_ip: 'test2',
hub: 'test2',
items_per_second: 222,
created_at: new Date(),
},
],
{ session }
)
await session.commitTransaction()
session.endSession()
return {
res1,
res2,
}
} catch (error) {
console.log('starting rollback')
await session.abortTransaction()
session.endSession()
console.log(error)
}
}

Check res1: its insert was rolled back and the document can no longer be found in the database.

Related links

https://www.jianshu.com/p/8d7dea5c067b
https://www.zhangshengrong.com/p/Ap1ZeQ2PX0/

Preface

As everyone knows, mongoose stores dates in ISODate format, i.e. UTC. For example 2020-12-11T16:00:00.000Z: T is the separator and Z indicates UTC.

UTC is the world time standard; adding 8 hours gives UTC+8, i.e. Beijing time.

For example:

Beijing time 2020-12-12 00:00:00 corresponds to the UTC format 2020-12-11T16:00:00.000Z.

When the frontend receives the UTC time from an API, it usually converts it to local time simply with new Date(value).

These facts are well known, so I won't give more examples.

Why this note

Before writing this note I hit a case where a date stored in the database was not converted to UTC the way I expected. Out of curiosity I turned on mongoose's debug mode to see what actually happens between mongoose and the native driver.
// enable mongoose debug mode
this.app.mongoose.set('debug', true)

As an example, I have a schema with an uploadDate field; let's see what actually happens when different kinds of date values are inserted.

module.exports = (app) => {
const mongoose = app.mongoose
const Schema = mongoose.Schema

const FileSchema = new Schema(
{
fileName: { type: String }, // file name
uploadDate: { type: Date }, // upload date
creatorName: { type: String }, // creator name
},
{
collection: 'sys_files',
}
)

return mongoose.model('File', FileSchema)
}
  • Method 1: insert with new Date
    const deliver = await ctx.model.Delivery.File.create({
    fileName: '文件名',
    uploadDate: new Date("2021-12-12 00:00:00"),
    creatorName: "张三",
    })

From the debug log:
Mongoose: sys_files.insertOne({ isDel: false, _id: ObjectId("61b5599cba1fcfeeb79c57cd"), fileName: '文件名', uploadDate: new Date("Sat, 11 Dec 2021 16:00:00 GMT"), creatorName: '张三', __v: 0}, { session: null })

Conclusion:

mongoose's create actually calls insertOne underneath. The local time new Date("2021-12-12 00:00:00") (i.e. 2021-12-11T16:00:00.000Z) reaches the native layer as Sat, 11 Dec 2021 16:00:00 GMT and is passed through new Date() once more.

So the whole path from the ORM layer to the native mongodb layer is:
date passed in -> (ORM) converted to GMT (zero timezone) -> (ORM) new Date() into ISODate -> stored

  • -> new Date("2021-12-12 00:00:00") passed in -> 2021-12-11T16:00:00.000Z
  • -> (ORM) converted to GMT: Sat, 11 Dec 2021 16:00:00 GMT
  • -> (ORM) new Date() into ISODate: new Date("Sat, 11 Dec 2021 16:00:00 GMT") -> 2021-12-11T16:00:00.000Z
  • -> stored

Let's check that a plain string and a moment object go through the same path: date passed in -> (ORM) converted to GMT -> (ORM) new Date() into ISODate -> stored.

  • Method 2: insert with a String

    const deliver = await ctx.model.Delivery.File.create({
    fileName: '文件名',
    uploadDate: "2021-12-12 00:00:00",
    creatorName: "李四",
    })

Conclusion:

  • -> "2021-12-12 00:00:00" passed in

  • -> (ORM) converted to GMT: Sat, 11 Dec 2021 16:00:00 GMT

  • -> (ORM) new Date() into ISODate: new Date("Sat, 11 Dec 2021 16:00:00 GMT") -> 2021-12-11T16:00:00.000Z

  • -> stored

  • Method 3: insert with moment

    const deliver = await ctx.model.Delivery.File.create({
    fileName: '文件名',
    uploadDate: moment("2021-12-12 00:00:00"),
    creatorName: '王五',
    })

Conclusion:

  • -> moment("2021-12-12 00:00:00") passed in -> Moment<2021-12-12T00:00:00+08:00>
  • -> (ORM) converted to GMT: Sat, 11 Dec 2021 16:00:00 GMT
  • -> (ORM) new Date() into ISODate: new Date("Sat, 11 Dec 2021 16:00:00 GMT") -> 2021-12-11T16:00:00.000Z
  • -> stored

others

As long as the value includes hours, minutes and seconds, it is converted to UTC correctly when it enters mongodb.

Now let's try a date-only (year-month-day) value:
const file = await ctx.model.Delivery.File.create({
fileName: '文件名',
uploadDate: new Date('2021-12-12'),
creatorName: '刘九',
})

Conclusion:

  • -> new Date('2021-12-12') passed in -> 2021-12-12T00:00:00.000Z (note how a date-only value converts to UTC differently from the hour/minute/second cases above)
  • -> (ORM) converted to GMT: Sun, 12 Dec 2021 00:00:00 GMT
  • -> (ORM) new Date() into ISODate: new Date("Sun, 12 Dec 2021 00:00:00 GMT") -> 2021-12-12T00:00:00.000Z
  • -> stored

This explains why a date I passed in as 2021-12-12 ended up in the database as UTC 2021-12-12T00:00:00.000Z, and why local times built with a date-only new Date() appear to gain 8 hours.

Summary

The mongoose ORM effectively forces a new Date() conversion to UTC, so whatever local time you pass in is coerced into the ISODate format mongodb requires.
Times produced by moment, dayjs or any other library therefore all end up going through the same forced new Date() UTC conversion; the time library or format you use makes no direct difference.

Bonus

Let's also verify that query parameters are forced through new Date() by mongoose in the same way.

The verified flow:

  • -> time passed in
  • -> (ORM) converted to GMT: Sat, 11 Dec 2021 00:00:00 GMT
  • -> (ORM) new Date() into ISODate: new Date("Sat, 11 Dec 2021 00:00:00 GMT") -> 2021-12-11T00:00:00.000Z
  • -> query executed

Preface

In the mongoose association-query tips note, under "association trick 1: querying the parent from the child via a reference", we saw that a child can look up its parent with $lookup. Is there a shortcut that lets us define a field on the child so that querying the child also brings back the parent's information?

Using virtuals

Virtual populate: official documentation

Enable virtuals in the schema's toJSON and toObject options so they show up in output:

toJSON: {
virtuals: true,
},
toObject: {
virtuals: true,
},

Combining fields

personSchema.virtual('fullName').get(function () {
return this.name.first + ' ' + this.name.last;
});

var Person = mongoose.model('Person', personSchema);

var p = new Person({
name: { first: 'junyao', last: 'hong' }
});

p.fullName // junyao hong

Virtual populate (querying the parent from the child)
BookSchema.virtual('author', {
ref: 'authors',
localField: '_id',
foreignField: 'books',
justOne: true,
})
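With this virtual in place (and virtuals enabled in toJSON/toObject as above), querying a book can pull in its author. A minimal usage sketch, assuming Book is the model built from BookSchema:

const book = await Book.findById(bookId).populate('author')
console.log(book.author) // the author document whose books array contains this book's _id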