Last time we tried scalar functions: flink sql 自定义udf实践之标量函数.
This time let's try table-valued functions.

package org.example;

import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

/**
 * classname SplitFunction
 * description table-valued function: one input row can emit any number of output rows
 */
@FunctionHint(output = @DataTypeHint("ROW<word STRING, length INT>"))
public class SplitFunction extends TableFunction<Row> {
    public void eval(String str) {
        // split the input on spaces and emit one row per word via collect()
        for (String s : str.split(" ")) {
            collect(Row.of(s, s.length()));
        }
    }
}
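
Before SQL can call it, the function has to be registered. A minimal usage sketch (the table MyTable and its line column are hypothetical, standing in for whatever source table you have):

CREATE TEMPORARY FUNCTION SplitFunction AS 'org.example.SplitFunction';

-- each input row of MyTable is expanded into one row per word
SELECT line, word, length
FROM MyTable
LEFT JOIN LATERAL TABLE(SplitFunction(line)) ON TRUE;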

Background

Solution to be validated

The business needs to widen a table that carries product IDs, pushing more product attributes into ES so that upstream and downstream systems can query ES. We ran into the following problems:

  • 1. The product table acts as a dimension table, and at 2,000,000+ rows it puts pressure on storage and compute (memory)

  • 2. The product table lives in MongoDB, and the ecosystem has no source connector that supports lookup join against it

  • 3. We discovered for the first time that MongoDB temporal join only supports the primary key _id

Until now it was almost all MySQL CDC and I hadn't paid much attention to Mongo, but these problems made me look, for the first time, at how well MongoDB is supported in the Flink ecosystem. So let's change the approach and bring in the Flink Table Store lakehouse to solve the ODS-to-DWD-to-ADS problems:

  • 1. Make the product table a reusable dimension table
  • 2. Make the product table lookup-joinable, supporting not just primary-key joins but non-primary-key joins too
  • 3. Reduce the pressure on storage and compute (memory)


Simulated scenario: suppose my order detail table (order_item) carries a product_id. A product's full set of parameters is generally not stored on the order, so I need to widen the order detail table against the product table (product) as a dimension table and write the result to ES, so that upstream and downstream systems can search orders by product attributes or aggregate a top 10 of ordered products.

Task 1: ingest the MongoDB product table into Flink Table Store as a reusable dimension table

-- Create and use the FTS catalog
CREATE CATALOG `product_catalog` WITH (
  'type' = 'table-store',
  'warehouse' = '/tmp/table-store-101'
);

USE CATALOG `product_catalog`;


-- ODS table schema
-- Note: inside an FTS catalog, tables that use other connectors must be declared TEMPORARY
-- product source table (ODS)
CREATE TEMPORARY TABLE ods_product (
  _id STRING,
  created TIMESTAMP_LTZ(3),
  mfrId STRING,
  mfrName STRING,
  name STRING,
  ras STRING,
  sn STRING,
  spec STRING,
  status BOOLEAN,
  taxrate INT,
  unit STRING,
  updated TIMESTAMP_LTZ(3),
  price DECIMAL(10, 5),
  taxcode STRING,
  clone STRING,
  lastOrderAt TIMESTAMP_LTZ(3),
  manual STRING,
  pn STRING,
  cumulativeSales INT,
  isDeprecated BOOLEAN,
  ship STRING,
  storage STRING,
  isPublic BOOLEAN,
  invtCode STRING,
  PRIMARY KEY (_id) NOT ENFORCED
) WITH (
  'connector' = 'mongodb-cdc',
  'hosts' = 'localhost:27017',
  'username' = 'XXX',
  'password' = 'XXX',
  'database' = 'biocitydb',
  'collection' = 'product'
);


-- DWD table schema
-- Create a table in the table-store catalog
-- product lake table (DWD)
CREATE TABLE `dwd_product` (
  _id STRING,
  created TIMESTAMP_LTZ(3),
  mfrId STRING,
  mfrName STRING,
  name STRING,
  ras STRING,
  sn STRING,
  spec STRING,
  status BOOLEAN,
  taxrate INT,
  unit STRING,
  updated TIMESTAMP_LTZ(3),
  price DECIMAL(10, 5),
  taxcode STRING,
  clone STRING,
  lastOrderAt TIMESTAMP_LTZ(3),
  manual STRING,
  pn STRING,
  cumulativeSales INT,
  isDeprecated BOOLEAN,
  ship STRING,
  storage STRING,
  isPublic BOOLEAN,
  invtCode STRING,
  PRIMARY KEY (_id) NOT ENFORCED
);


-- ods to dwd
-- ingest the source table into the lake
INSERT INTO
  dwd_product
SELECT
  _id,
  created,
  mfrId,
  mfrName,
  name,
  ras,
  sn,
  spec,
  status,
  taxrate,
  unit,
  updated,
  price,
  taxcode,
  clone,
  lastOrderAt,
  manual,
  pn,
  cumulativeSales,
  isDeprecated,
  ship,
  storage,
  isPublic,
  invtCode
FROM
  ods_product;

At this point we have created a CDC-fed dimension table in Flink Table Store.

Task 2: lookup join the order detail table ods_order_item against the dimension table dwd_product


USE CATALOG `product_catalog`;

-- DWD table schema
-- product lake table (DWD, the dimension table); already created in task 1, shown here for reference
CREATE TABLE `dwd_product` (
  _id STRING,
  created TIMESTAMP_LTZ(3),
  mfrId STRING,
  mfrName STRING,
  name STRING,
  ras STRING,
  sn STRING,
  spec STRING,
  status BOOLEAN,
  taxrate INT,
  unit STRING,
  updated TIMESTAMP_LTZ(3),
  price DECIMAL(10, 5),
  taxcode STRING,
  clone STRING,
  lastOrderAt TIMESTAMP_LTZ(3),
  manual STRING,
  pn STRING,
  cumulativeSales INT,
  isDeprecated BOOLEAN,
  ship STRING,
  storage STRING,
  isPublic BOOLEAN,
  invtCode STRING,
  PRIMARY KEY (_id) NOT ENFORCED
);

-- ODS table schema
-- Note: inside an FTS catalog, tables that use other connectors must be declared TEMPORARY
-- order source table
CREATE TEMPORARY TABLE `ods_order_item` (
  _id STRING,
  order_id INT,
  status INT,
  price INT,
  order_date DATE,
  product_id STRING,
  proc_time AS PROCTIME(),
  PRIMARY KEY (_id) NOT ENFORCED
) WITH (
  'connector' = 'mongodb-cdc',
  'hosts' = 'localhost:27017',
  'username' = 'XXX',
  'password' = 'XXX',
  'database' = 'biocitydb',
  'collection' = 'order_item'
);


-- ADS table schema
-- wide detail table in ES
CREATE TEMPORARY TABLE ads_es_enrich_order_item (
  _id STRING,
  order_id INT,
  status INT,
  price DECIMAL(15, 2),
  order_date DATE,
  product_id STRING,
  mfr_name STRING, -- widened from the product table
  product_name STRING,
  ras STRING,
  sn STRING,
  PRIMARY KEY (_id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  'index' = 'es_enrich_order_item'
);


-- widen
INSERT INTO
  ads_es_enrich_order_item
SELECT
  o._id,
  o.order_id,
  o.status,
  CAST(o.price AS DECIMAL(15, 2)),
  o.order_date,
  o.product_id,
  p.mfrName,
  p.name,
  p.ras,
  p.sn
FROM ods_order_item AS o
JOIN dwd_product FOR SYSTEM_TIME AS OF o.proc_time AS p
ON o.product_id = p._id; -- the lookup join is not limited to the _id primary key; non-primary-key joins work too
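
For instance, a non-primary-key variant looks identical apart from the join condition. A purely hypothetical sketch, matching against the product sn column just for illustration:

-- hypothetical: lookup join on a non-primary-key column of the dimension table
SELECT o.order_id, p.name
FROM ods_order_item AS o
JOIN dwd_product FOR SYSTEM_TIME AS OF o.proc_time AS p
ON o.product_id = p.sn;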

Related links

flink table store Lookup Join
flink-table-store-101
Flink Table Store 0.3 streaming-warehouse best practices

example for auto-create and connect table store


CREATE TEMPORARY TABLE word_count (
  word STRING PRIMARY KEY NOT ENFORCED,
  cnt BIGINT
) WITH (
  'connector' = 'table-store',
  'path' = 'file:/tmp/word',
  'auto-create' = 'true'
);


CREATE TEMPORARY TABLE word_table (
  word STRING
) WITH (
  'connector' = 'datagen',
  'fields.word.length' = '1'
);

SET 'execution.checkpointing.interval' = '10 s';


INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word;


SET 'sql-client.execution.result-mode' = 'tableau';
SET 'execution.runtime-mode' = 'streaming';
SELECT * FROM word_count;


Kubernetes Ingress, Istio Ingressgateway, or Gateway API? Part: Kubernetes Gateway API

With the official release of Istio 1.16.0, Istio's implementation of the Kubernetes Gateway API reached Beta, which means all of Istio's north-south (ingress) traffic management can be migrated to the Kubernetes Gateway API. As the Kubernetes Gateway API develops and matures, Istio's east-west (mesh) traffic-management APIs will gradually be replaced by it as well.

It must be said: in this new version even the installation of the Gateway API has changed.

Prerequisites

To manage traffic with the Kubernetes Gateway API, the following must be in place:

  • Istio 1.16.0 or later
  • Kubernetes 1.22 or later
  • Gateway API 0.5.0 or later

Setup

  • This experiment runs on Kubernetes enabled in Docker Desktop
  • istio 1.16.1, profile=minimal
  • Kubernetes 1.24.2

Task: route the domain httpbin.example.com to the k8s service httpbin through a gateway, and configure HTTPS

Key focus: using Gateway and HTTPRoute

Install the Gateway API

Even the installation method differs from before:

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.5.1/standard-install.yaml

customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
namespace/gateway-system created
validatingwebhookconfiguration.admissionregistration.k8s.io/gateway-api-admission created
service/gateway-api-admission-server created
deployment.apps/gateway-api-admission-server created
serviceaccount/gateway-api-admission created
clusterrole.rbac.authorization.k8s.io/gateway-api-admission created
clusterrolebinding.rbac.authorization.k8s.io/gateway-api-admission created
role.rbac.authorization.k8s.io/gateway-api-admission created
rolebinding.rbac.authorization.k8s.io/gateway-api-admission created
job.batch/gateway-api-admission created
job.batch/gateway-api-admission-patch created

Install Istio

Download Istio 1.16.1 and install it with the minimal profile, which installs only the control-plane components:

$ curl -L https://istio.io/downloadIstio | sh -
$ cd istio-1.16.1
$ export PATH=$PWD/bin:$PATH
$ istioctl install --set profile=minimal -y

✔ Istio core installed
✔ Istiod installed
✔ Installation complete
Making this installation the default for injection and validation.

Thank you for installing Istio 1.16.1. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/99uiMML96AmsXY5d6

Install the application and configure the gateway

1. Deploy the httpbin application using the samples template that ships with Istio:

$ kubectl apply -f samples/httpbin/httpbin.yaml
serviceaccount/httpbin created
service/httpbin created
deployment.apps/httpbin created

2. Create the istio-ingress namespace and deploy a Gateway and HTTPRoute that steer traffic for httpbin.example.com/get/* to port 8000 of the httpbin service:

Save the manifest below as samples/httpbin/gateway-api/httpbin-gateway.yaml inside the Istio directory.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-ingress
spec:
  gatewayClassName: istio
  listeners:
  - name: default
    hostname: "*.example.com"
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http
  namespace: default
spec:
  parentRefs:
  - name: gateway
    namespace: istio-ingress
  hostnames: ["httpbin.example.com"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /get
    backendRefs:
    - name: httpbin
      port: 8000
$ kubectl create namespace istio-ingress
$ kubectl apply -f samples/httpbin/gateway-api/httpbin-gateway.yaml

Wait for the Gateway to finish deploying, then set the ingress host environment variable:

$ kubectl wait -n istio-ingress --for=condition=ready gateways.gateway.networking.k8s.io gateway
$ export INGRESS_HOST=$(kubectl get gateways.gateway.networking.k8s.io gateway -n istio-ingress -ojsonpath='{.status.addresses[*].value}')

Access the httpbin service with curl:

$ curl -s -I -H Host:httpbin.example.com "http://$INGRESS_HOST/get"
HTTP/1.1 200 OK
server: istio-envoy
...

Now test a route that was not configured, /headers; it returns HTTP 404:

$ curl -s -I -H Host:httpbin.example.com "http://$INGRESS_HOST/headers"
HTTP/1.1 404 Not Found
...

Update the route rules: add a /headers route and attach a custom header to requests:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http
  namespace: default
spec:
  parentRefs:
  - name: gateway
    namespace: istio-ingress
  hostnames: ["httpbin.example.com"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /get
    - path:
        type: PathPrefix
        value: /headers
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: my-added-header
          value: added-value
    backendRefs:
    - name: httpbin
      port: 8000

Accessing the /headers route again now works, and the request headers carry the added "My-Added-Header":

$ curl -s -H Host:httpbin.example.com "http://$INGRESS_HOST/headers"
{
"headers": {
"Accept": "*/*",
"Host": "httpbin.example.com",
"My-Added-Header": "added-value",
...
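
The task also called for HTTPS. A minimal sketch of an HTTPS listener on the same Gateway, assuming a TLS secret named httpbin-credential exists in the istio-ingress namespace (created, for example, with kubectl create secret tls):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-ingress
spec:
  gatewayClassName: istio
  listeners:
  - name: https
    hostname: "httpbin.example.com"
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: httpbin-credential   # hypothetical TLS secret holding cert and key
    allowedRoutes:
      namespaces:
        from: All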


Kubernetes Ingress, Istio Ingressgateway, or Gateway API? Part: Istio Ingressgateway

Setup

  • This experiment runs on Kubernetes enabled in Docker Desktop
  • istio 1.16.1

Task: route the domain httpbin.example.com to the k8s service httpbin through a gateway, and configure HTTPS
Key focus: using istio's Gateway and VirtualService

Configure HTTP

Prerequisites:

  • A Service named httpbin on port 8000, deployed with the manifest below
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: httpbin
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: httpbin
      labels:
        app: httpbin
        service: httpbin
    spec:
      ports:
      - name: http
        port: 8000
        targetPort: 80
      selector:
        app: httpbin
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpbin
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v1
      template:
        metadata:
          labels:
            app: httpbin
            version: v1
        spec:
          serviceAccountName: httpbin
          containers:
          - image: docker.io/kennethreitz/httpbin
            imagePullPolicy: IfNotPresent
            name: httpbin
            ports:
            - containerPort: 80

  • Assume we own the domain httpbin.example.com

Next we configure a Gateway and a VirtualService. If you have used nginx, a rough analogy is a server block: the Gateway carries the server-level basics (domain, port), while the VirtualService plays the role of location and carries the routing rules.

---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
  namespace: junyao
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com" ## must match the Gateway hosts above
  gateways:
  - httpbin-gateway ## must match the Gateway metadata.name above
  http:
  - match:
    - uri:
        prefix: /status ## exposed route
    - uri:
        prefix: /delay ## exposed route (note /headers and /get are not exposed yet)
    route:
    - destination:
        port:
          number: 8000
        host: httpbin ## the Service name

A VirtualService has been created for the httpbin service, with two route rules that allow traffic to the paths /status and /delay.
The gateways list restricts which requests may pass through the httpbin-gateway gateway; all other external requests are rejected with a 404 response.

Access the service

  • Via NodePort
    kubectl patch service istio-ingressgateway -n istio-system -p '{"spec":{"type":"NodePort"}}'
    Check the port bound to 80; here it is 30984
    kubectl -n istio-system get service istio-ingressgateway
    ## /status matches a VirtualService route, so the request succeeds
    curl -s -I -HHost:httpbin.example.com "http://localhost:30984/status/200"
    HTTP/1.1 200 OK
    server: istio-envoy
    date: Wed, 28 Dec 2022 03:18:22 GMT
    content-type: text/html; charset=utf-8
    access-control-allow-origin: *
    access-control-allow-credentials: true
    content-length: 0
    x-envoy-upstream-service-time: 3
    ## /headers matches no VirtualService route, so we get 404 Not Found
    curl -s -I -HHost:httpbin.example.com "http://localhost:30984/headers"
    HTTP/1.1 404 Not Found
    date: Wed, 28 Dec 2022 03:18:34 GMT
    server: istio-envoy
    transfer-encoding: chunked

Configure HTTPS

HTTPS is basically table stakes now, so we need it here too. There are three things to do:

  • Create a Kubernetes secret holding the server's certificate and private key. Concretely, use kubectl to create a secret named istio-ingressgateway-certs in the istio-system namespace; the Istio gateway loads this secret automatically.
    To serve multiple domains I used the Opaque type, with the naming convention {domain}.crt and {domain}.key. Note that the values must be base64-encoded (see the command sketch after the manifest)!

    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: istio-ingressgateway-certs
      namespace: istio-system
    data:
      httpbin.example.com.crt: {your-crt-content}
      httpbin.example.com.key: {your-key-content}
      another.example.com.crt: {your-crt-content}
      another.example.com.key: {your-key-content}
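
    The base64 values can be produced like this (a sketch; -w0 is GNU base64's "no line wrapping" flag):

    # encode the cert and key for the Secret's data fields
    base64 -w0 httpbin.example.com.crt
    base64 -w0 httpbin.example.com.key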
  • Place the certificates and private keys under /etc/istio/ingressgateway-certs, named with the same convention as in step 1. Note this means the host machine (the cluster master), or wherever you run kubectl; I ran the commands on the master and have not verified other setups. The layout looks like this:

    Note: the files go in untouched here; no base64 encoding

    .
    ├── httpbin.example.com.crt
    ├── httpbin.example.com.key
    ├── another.example.com.crt
    └── another.example.com.key
  • Add the HTTPS declaration to the Gateway.

    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: httpbin-gateway
    spec:
      selector:
        istio: ingressgateway # use Istio default gateway implementation
      servers:
      - port:
          number: 80
          name: http-httpbin
          protocol: HTTP
        hosts:
        - "httpbin.example.com"
        # new content starts here
        tls:
          httpsRedirect: true # force redirect to HTTPS
      - port:
          number: 443
          name: https-httpbin
          protocol: HTTPS
        tls:
          mode: SIMPLE
          serverCertificate: /etc/istio/ingressgateway-certs/httpbin.example.com.crt
          privateKey: /etc/istio/ingressgateway-certs/httpbin.example.com.key
        hosts:
        - httpbin.example.com
        # new content ends here
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: httpbin
    spec:
      hosts:
      - "httpbin.example.com" ## must match the Gateway hosts above
      gateways:
      - httpbin-gateway ## must match the Gateway metadata.name above
      http:
      - match:
        - uri:
            prefix: /status ## exposed route
        - uri:
            prefix: /delay ## exposed route (note /headers and /get are not exposed yet)
        route:
        - destination:
            port:
              number: 8000
            host: httpbin ## the Service name
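
To verify HTTPS end to end, a sketch along the same lines as the HTTP test; the 443 NodePort (here assumed to be 31390) is looked up from istio-ingressgateway the same way port 80 was:

# -k skips verification for a self-signed cert; --resolve pins the hostname to the local NodePort
curl -sk -I --resolve httpbin.example.com:31390:127.0.0.1 https://httpbin.example.com:31390/status/200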

Background

Earlier we did a lot of groundwork building out the OIDC standard, precisely so we could integrate third-party systems against it. This time we try SSO for GitLab using the OpenID Connect standard.
The setup is covered in: OIDC搭建之Ory Hydra 2.0实践

Practice

omniauth docs

Edit the gitlab.rb file

Under the OIDC standard, much of what you need to fill in can be found at {{baseUrl}}/.well-known/openid-configuration
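
For example, assuming a Hydra issuer listening on http://127.0.0.1:4444 (as in the deployment later in this post; jq is just optional pretty-printing):

# dump the OIDC discovery document
curl -s http://127.0.0.1:4444/.well-known/openid-configuration | jq .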

gitlab_rails['omniauth_enabled'] = true # enable omniauth
gitlab_rails['omniauth_allow_single_sign_on'] = true # if true, GitLab auto-creates the user when it does not exist yet
gitlab_rails['omniauth_block_auto_created_users'] = false # whether to block auto-created users; false means they are not blocked, true means a GitLab admin must unblock them manually
gitlab_rails['omniauth_auto_link_user'] = true # whether to auto-link existing GitLab accounts


gitlab_rails['omniauth_providers'] = [
  {
    'name' => 'oauth2_generic',
    'app_id' => 'faff0a71-45d5-4636-a91c-ff637888745c', # OAuth2 app_id, issued by the SSO service
    'app_secret' => 'TsittyC1.nr4LcBjf8p9ud2E0H', # OAuth2 app_secret, issued by the SSO service
    'args' => {
      client_options: {
        'site' => 'https://api.junyao.com/hydra', # SSO base URL
        'authorize_url' => '/oauth2/auth', # authorization URL
        'token_url' => '/oauth2/token', # token URL
        'user_info_url' => '/userinfo' # user info URL
      },
      user_response_structure: {
        root_path: [], # i.e. if attributes are returned in JsonAPI format (in a 'user' node nested under a 'data' node)
        id_path: ['uid'], # how to configure the user info here is explained in detail below
        attributes: { name: 'username', nickname: 'nickname', email: 'email' } # explained in detail below
      },
      # optionally, you can add the following two lines to "white label" the display name
      # of this strategy (appears in urls and Gitlab login buttons)
      # If you do this, you must also replace oauth2_generic, everywhere it appears above, with the new name.
      name: 'SSO', # used on the login page and when setting the identifier; English only (Chinese is not supported)
      strategy_class: "OmniAuth::Strategies::OAuth2Generic" # Devise-specific config option Gitlab uses to find renamed strategy
    }
  }
]
gitlab_rails['omniauth_providers'] = [
  { 'name' => 'openid_connect',
    'label' => 'Authing',
    'args' => {
      'name' => 'openid_connect',
      'scope' => ['openid','profile','email','phone'],
      'response_type' => 'code',
      'issuer' => '<oidc_issuer>',
      'discovery' => true,
      'client_auth_method' => 'basic',
      'uid_field' => 'sub',
      'client_options' => {
        'identifier' => '<oidc_identifier>',
        'secret' => '<oidc_secret>',
        'redirect_uri' => '<your_gitlab_url>/users/auth/openid_connect/callback'
      }
    }
  }
]
sudo gitlab-ctl reconfigure

Configuration notes

user_response_structure
This maps the user info returned by your SSO service's user_info_url endpoint. Say your user info endpoint returns this structure:

{
  "code": 200,
  "data": {
    "uid": 1,
    "username": "zhangsan",
    "nickname": "张三",
    "email": "zhangsan@junyao.com"
  }
}

Then root_path should point at the node that wraps the user attributes, i.e. ['data'], and id_path is best set to the user's unique identifier.

For more detailed notes see the GitLab docs: omniauth-oauth2-generic

Related

Single sign-on to GitLab with Authing


Why picking a gateway on K8s is an agonizing choice: a summary of the options

  • ingress-nginx and the many other Kubernetes Ingress controllers
  • the Istio Ingressgateway that comes with the istio microservice mesh
  • API Gateway

So how to choose? There are two schools of thought:
1. Treat k8s purely as a deployment platform and stay decoupled from it: everything lives in your own code, including basic gateway capabilities such as routing as well as advanced gateway logic such as circuit breaking.
2. Treat k8s as part of the application: hand API routing, circuit breaking, and the rest over to k8s or istio.

Either way, let's walk through all of them.

We'll deploy the httpbin service, then put each of the three gateways in front of it to see how they differ.

Background

This time we deploy the latest release, 1.25.4, using my 2018 notes Kubernetes实践-部署 as a reference. The main difference is that docker has been replaced by containerd.

Preparation (all nodes)

Host         IP            Description
k8s          10.8.111.200  CentOS7 template machine, cloned to create the nodes below
k8s-master1  10.8.111.202  CentOS7
k8s-node1    10.8.111.203  CentOS7
k8s-node2    10.8.111.204  CentOS7

This setup uses VMs: first build a k8s template VM with everything every node needs, then clone it three times and apply each node's own configuration.

Set a static IP on each server

vi /etc/sysconfig/network-scripts/ifcfg-ensXXX

Template machine

ONBOOT="yes"
BOOTPROTO=static

IPADDR="10.8.111.200"
GATEWAY="10.8.99.1"
NETMASK="255.255.255.0"
DNS1="114.114.114.114"

Each node then adjusts this to its own IP plan.

systemctl restart network

Set hostnames and configure hosts

# run on 10.8.111.200
hostnamectl set-hostname k8s
# run on 10.8.111.202
hostnamectl set-hostname k8s-master1
# run on 10.8.111.203
hostnamectl set-hostname k8s-node1
# run on 10.8.111.204
hostnamectl set-hostname k8s-node2

Configure hosts

10.8.111.200 k8s
10.8.111.202 k8s-master1
10.8.111.203 k8s-node1
10.8.111.204 k8s-node2

10.8.111.202 cluster-endpoint

Upgrade the OS kernel

# import the elrepo GPG key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

# install the elrepo YUM repository
yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

# install kernel-ml (mainline); kernel-lt is the long-term maintenance branch
yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64

# set the default grub2 boot entry to 0
grub2-set-default 0

# regenerate the grub2 config
grub2-mkconfig -o /boot/grub2/grub.cfg

# reboot so the upgraded kernel takes effect
reboot

# after rebooting, verify the kernel version matches the upgrade
uname -r

ipvs setup

yum -y install ipvsadm ipset

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

# load the modules now and verify
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

Time sync

yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources
# force an immediate sync
chronyc -a makestep

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable swap

# turn swap off for this boot; swap is disabled mainly for performance
swapoff -a
# confirm swap is off
free
# disable permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab

Disable SELinux

# disable for the current boot
setenforce 0
# disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

Let iptables see bridged traffic

For the Linux nodes' iptables to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config. For example:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# set the required sysctl parameters; they persist across reboots

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# apply the sysctl parameters without rebooting
sudo sysctl --system

Install containerd (all nodes)

  • Install containerd

    sudo yum install -y yum-utils
    sudo yum-config-manager \
      --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo

    # or

    wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


    sudo yum install containerd.io -y

    systemctl enable containerd
    systemctl start containerd
  • Configure containerd: change the sandbox_image source

    # dump the default config; config.toml does not exist by default
    containerd config default > /etc/containerd/config.toml

    # check before the change
    grep sandbox_image /etc/containerd/config.toml

    # change the sandbox_image source; k8s below 1.24 used k8s.gcr.io, 1.25 switched to registry.k8s.io
    sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml

    # check after the change
    grep sandbox_image /etc/containerd/config.toml
  • Set the containerd cgroup driver to systemd

    Since v1.24.0, Kubernetes no longer ships dockershim; containerd is used as the container runtime endpoint.

    # change SystemdCgroup = false to SystemdCgroup = true
    sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
  • Configure a containerd registry mirror
    Add Aliyun's mirror at the endpoint location

    $ vi /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://xxxxxxxx.mirror.aliyuncs.com", "https://registry-1.docker.io"]
  • Restart containerd

    systemctl daemon-reload
    systemctl enable --now containerd
    systemctl restart containerd

Configure the k8s yum repo (all nodes)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

Install Kubernetes

Install kubeadm, kubelet, and kubectl (master node)

# without a version pin you get the latest, which is currently 1.25.4
yum install -y kubelet-1.25.4 kubeadm-1.25.4 kubectl-1.25.4 --disableexcludes=kubernetes
# --disableexcludes=kubernetes: disable every repo other than kubernetes
# enable at boot and start now (--now: start the service immediately)
systemctl enable --now kubelet

# check the status; wait a while before checking, startup is slow
# you will find the kubelet service is not running properly
systemctl status kubelet

# check versions

kubectl version
yum info kubeadm

# inspect the actual error
journalctl -u kubelet.service

The logs show an error like this:

Nov 30 06:02:22 k8s-200 kubelet[1922]: E1130 06:02:22.353853    1922 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config 
Nov 30 06:02:22 k8s-200 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Nov 30 06:02:22 k8s-200 systemd[1]: Unit kubelet.service entered failed state.
Nov 30 06:02:22 k8s-200 systemd[1]: kubelet.service failed.

Explanation: before kubeadm init or kubeadm join has run, kubelet restarts in a loop. This is normal and resolves itself after init or join, as the official docs describe, so kubelet.service can be ignored for now.

Check versions

kubectl version
yum info kubeadm


List the images required for Kubernetes initialization

kubeadm config images list --kubernetes-version v1.25.4

registry.k8s.io/kube-apiserver:v1.25.4
registry.k8s.io/kube-controller-manager:v1.25.4
registry.k8s.io/kube-scheduler:v1.25.4
registry.k8s.io/kube-proxy:v1.25.4
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.5-0
registry.k8s.io/coredns/coredns:v1.9.3

# list the equivalents from the domestic mirror
kubeadm config images list --kubernetes-version v1.25.4 --image-repository registry.aliyuncs.com/google_containers

registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.4
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.25.4
registry.aliyuncs.com/google_containers/kube-scheduler:v1.25.4
registry.aliyuncs.com/google_containers/kube-proxy:v1.25.4
registry.aliyuncs.com/google_containers/pause:3.8
registry.aliyuncs.com/google_containers/etcd:3.5.5-0
registry.aliyuncs.com/google_containers/coredns:v1.9.3

Initialize the cluster

kubeadm init \
--apiserver-advertise-address=10.8.111.202 \
--image-repository registry.aliyuncs.com/google_containers \
--control-plane-endpoint=cluster-endpoint \
--kubernetes-version v1.25.4 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--v=5
  • --image-repository string: where to pull images from (added in 1.13); the default is k8s.gcr.io, and we point it at the domestic mirror registry.aliyuncs.com/google_containers
  • --kubernetes-version string: pin the Kubernetes version. The default stable-1 triggers a download of https://dl.k8s.io/release/stable-1.txt to resolve the latest version, so pinning it (v1.25.4) skips that network request.
  • --apiserver-advertise-address: which interface the master uses to talk to the rest of the cluster. If the master has multiple interfaces it's best to be explicit; otherwise kubeadm picks the one with the default gateway. This is the master node IP; remember to change it.
  • --pod-network-cidr: the Pod network range. Kubernetes supports many network plugins, each with its own requirements for this flag; 10.244.0.0/16 is used here because we will deploy flannel, which requires exactly this CIDR.
  • --control-plane-endpoint: cluster-endpoint is a custom DNS name mapped to the master IP via the hosts entry 10.8.111.202 cluster-endpoint. This lets you pass --control-plane-endpoint=cluster-endpoint to kubeadm init and the same name to kubeadm join, and later repoint cluster-endpoint at a load balancer in a high-availability setup.
  • --service-cidr: the cluster-internal virtual network, the uniform entry point to Pods

    Note: kubeadm does not support converting a single control-plane cluster created without --control-plane-endpoint into a highly available one.

Reset and re-initialize

kubeadm reset

rm -fr ~/.kube/ /etc/kubernetes/* /var/lib/etcd/*

kubeadm init \
--apiserver-advertise-address=10.8.111.202 \
--image-repository registry.aliyuncs.com/google_containers \
--control-plane-endpoint=cluster-endpoint \
--kubernetes-version v1.25.4 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--v=5

On success

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join cluster-endpoint:6443 --token 2gaeoh.fq98xja5pkj7n98g \
--discovery-token-ca-cert-hash sha256:95c5de0914011e39149818272161e877f2b654401bdf9433032bc28b059dc06c \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token 2gaeoh.fq98xja5pkj7n98g \
--discovery-token-ca-cert-hash sha256:95c5de0914011e39149818272161e877f2b654401bdf9433032bc28b059dc06c

Following the success message, set up kubectl credentials and environment variables

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# effective for this session only (lost when you reconnect)
export KUBECONFIG=/etc/kubernetes/admin.conf
# permanent (recommended)
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

The node still has a problem; cat /var/log/messages shows it is because no network plugin is installed:

"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

Let's join the worker nodes to the cluster first, then install a Pod network plugin.

Join the nodes to the k8s cluster

We'll join k8s-node1 and k8s-node2 (see the host table above) to the cluster.

First install kubelet

yum install -y kubelet-1.25.4 kubeadm-1.25.4 kubectl-1.25.4 --disableexcludes=kubernetes
# enable at boot and start now (--now: start the service immediately)
systemctl enable --now kubelet
systemctl status kubelet

Join the cluster

kubeadm join cluster-endpoint:6443 --token 2gaeoh.fq98xja5pkj7n98g \
--discovery-token-ca-cert-hash sha256:95c5de0914011e39149818272161e877f2b654401bdf9433032bc28b059dc06c

If you don't have a token, get one by running this on a control-plane node:

kubeadm token list

Tokens expire after 24 hours by default. To join a node after the current token has expired, create a new one on a control-plane node:

kubeadm token create
# then list again
kubeadm token list

If you don't have the value of --discovery-token-ca-cert-hash, get it by running this command chain on a control-plane node:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

If the join command printed by kubeadm init was not saved, regenerate it in one shot (recommended) instead of fetching the token and ca-cert-hash separately:

kubeadm token create --print-join-command

Check the nodes:

kubectl get pod -n kube-system
kubectl get node

Install a Pod network plugin

You must deploy a Container Network Interface (CNI) based Pod network plugin so that your Pods can talk to each other.

Flannel

In general, Flannel is a safe, solid choice early on, until you start needing something it cannot provide.

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f kube-flannel.yml

Because of the GFW, the image pulls will likely fail; we can pull the images up front with ctr (note the k8s.io namespace, which is where the CRI looks for images):

ctr -n k8s.io image pull docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
ctr -n k8s.io image pull docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2

Or import pre-downloaded offline files with ctr image import.
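
A sketch of that offline route (image names taken from kube-flannel.yml; run the export wherever the pull succeeds):

# on a machine that can pull: save the images to a tarball
ctr -n k8s.io image export flannel.tar \
  docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0 \
  docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
# on each node: import the offline file
ctr -n k8s.io image import flannel.tar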

Calico

Calico is a pure layer-3 datacenter networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.

On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) for data forwarding, and each vRouter propagates the routes of the workloads running on it across the whole Calico network via BGP.

Calico also implements Kubernetes network policy, providing ACL functionality.

1. Download Calico

wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate

vim calico.yaml

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

kubectl apply -f calico.yaml

Problems

Problem:

Warning  FailedScheduling  80s (x13 over 61m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

Solution:

On clusters initialized with kubeadm, Pods are not scheduled onto the master node for security reasons; it carries no workloads. Allowing the master node to run Pods fixes this (on 1.25 the taint is control-plane; older releases used master):

kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-

Actually joining worker nodes also resolves it; running master-only is not recommended.

Leave the cluster and rejoin

On the master node

Note: all of the following runs on the master.

Step 1: put the node into maintenance mode (k8s-node1 is the node name)

kubectl drain k8s-node1 --delete-emptydir-data --force --ignore-daemonsets

Step 2: delete the node

kubectl delete node k8s-node1

Step 3: confirm it is gone

kubectl get nodes

Step 4: generate a permanent token (used when the node rejoins)

kubeadm token create --ttl 0 --print-join-command
# output: kubeadm join 192.168.233.3:6443 --token rpi151.qx3660ytx2ixq8jk --discovery-token-ca-cert-hash sha256:5cf4e801c903257b50523af245f2af16a88e78dc00be3f2acc154491ad4f32a4

Step 5: double-check the token

kubeadm token list

Rejoin from the node

Note: the following runs on the node.

Step 1: stop kubelet

systemctl stop kubelet

Step 2: remove the old files

rm -rf /etc/kubernetes/*
kubeadm reset

Step 3: join the cluster

kubeadm join 192.168.233.3:6443 --token rpi151.qx3660ytx2ixq8jk --discovery-token-ca-cert-hash sha256:5cf4e801c903257b50523af245f2af16a88e78dc00be3f2acc154491ad4f32a4

Related links

A nanny-level Kubernetes 1.24 high-availability cluster deployment guide (Chinese)
K8s master node IP change and master HA failure simulation (Chinese)

Install Dapr

Install the Dapr CLI

dapr init -k
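
A quick way to confirm the control plane came up (dapr status -k is part of the CLI):

# control-plane services should report Running in the dapr-system namespace
dapr status -k
kubectl get pods -n dapr-system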

Practice

Deploying the calculator microservices with Dapr, the counterpart of the sidecar-architecture istio-on-k8s deployment.

Backend services

  • Addition: Go mux application
    Image: ghcr.io/dapr/samples/distributed-calculator-go:latest
    Container port: 6000
  • Multiplication: Python flask application
    Image: ghcr.io/dapr/samples/distributed-calculator-slow-python:latest
    Container port: 5001
  • Division: Node Express application
    Image: ghcr.io/dapr/samples/distributed-calculator-node:latest
    Container port: 4000
  • Subtraction: .NET Core application
    Image: ghcr.io/dapr/samples/distributed-calculator-csharp:latest
    Container port: 80

Frontend service

  • React
    Image: ghcr.io/dapr/samples/distributed-calculator-react-calculator:latest
    Container port: 8080

(Figure: the components and service architecture of the sample application.)

Deployment

Take any of the microservice manifests under the deploy/ directory, e.g. go-adder.yaml:

# go-adder.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: addapp
  labels:
    app: add
spec:
  replicas: 1
  selector:
    matchLabels:
      app: add
  template:
    metadata:
      labels:
        app: add
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "addapp"
        dapr.io/app-port: "6000"
        dapr.io/config: "appconfig"
    spec:
      containers:
      - name: add
        image: ghcr.io/dapr/samples/distributed-calculator-go:latest
        env:
        - name: APP_PORT
          value: "6000"
        ports:
        - containerPort: 6000
        imagePullPolicy: Always
kubectl apply -f deploy/

After the deployment, we can list every configuration in the cluster with the dapr configurations command:

➜  dapr configurations -k -A
NAMESPACE NAME TRACING-ENABLED METRICS-ENABLED AGE CREATED
default appconfig true true 1m 2022-09-20 17:01.21

Once the apps are deployed, check the Pod status:

➜  kubectl get pods
NAME READY STATUS RESTARTS AGE
addapp-84c9764fdb-72mxf 2/2 Running 0 74m
calculator-front-end-59cbb6658c-rbctf 2/2 Running 0 74m
divideapp-8476b7fbb6-kr8dr 2/2 Running 0 74m
multiplyapp-7c45fbbf99-hrmff 2/2 Running 0 74m
subtractapp-58645db87-25tg9 2/2 Running 0 62m
➜ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
addapp-dapr ClusterIP None <none> 80/TCP,50001/TCP,50002/TCP,9090/TCP 8m29s
calculator-front-end LoadBalancer 10.110.177.32 <pending> 80:31701/TCP 8m29s
calculator-front-end-dapr ClusterIP None <none> 80/TCP,50001/TCP,50002/TCP,9090/TCP 8m29s
divideapp-dapr ClusterIP None <none> 80/TCP,50001/TCP,50002/TCP,9090/TCP 8m29s
multiplyapp-dapr ClusterIP None <none> 80/TCP,50001/TCP,50002/TCP,9090/TCP 8m29s
subtractapp-dapr ClusterIP None <none> 80/TCP,50001/TCP,50002/TCP,9090/TCP 8m29s
zipkin NodePort 10.108.46.223 <none> 9411:32411/TCP 16m

With everything deployed we can reach the calculator frontend through the calculator-front-end LoadBalancer Service. Locally there is no LB attached, so the EXTERNAL-IP stays <pending>; either switch the Service to NodePort, or simply port-forward:

kubectl port-forward service/calculator-front-end 8000:80

Forwarding from 127.0.0.1:8000 -> 8080
Forwarding from [::1]:8000 -> 8080

Curl test

operands.json

{"operandOne":"52","operandTwo":"34"}

persist.json

[{"key":"calculatorState","value":{"total":"54","next":null,"operation":null}}]
curl -s http://localhost:8000/calculate/add -H Content-Type:application/json --data @operands.json
curl -s http://localhost:8000/calculate/subtract -H Content-Type:application/json --data @operands.json
curl -s http://localhost:8000/calculate/divide -H Content-Type:application/json --data @operands.json
curl -s http://localhost:8000/calculate/multiply -H Content-Type:application/json --data @operands.json
curl -s http://localhost:8000/persist -H Content-Type:application/json --data @persist.json
curl -s http://localhost:8000/state

Results

86
18
1.5294117647058822
1768

{"operation":null,"total":"54","next":null}

Frontend calling the backend services

When the frontend server calls the individual arithmetic services (see the server.js code below), it does not need to know their IP addresses or how they are built. Instead, it calls its local dapr sidecar by name; the sidecar knows how to invoke a method on a service using the platform's service-discovery mechanism, in this case Kubernetes DNS resolution. A hand-rolled call through the sidecar appears after the code.

${daprUrl}/${microservice name}/method/${service route}

e.g.: ${daprUrl}/addapp/method/add

  • addapp: our addition microservice
  • add: its route
    const daprUrl = `http://localhost:${daprPort}/v1.0/invoke`;

    app.post('/calculate/add', async (req, res) => {
      const appResponse = await axios.post(`${daprUrl}/addapp/method/add`, req.body);
      return res.send(`${appResponse.data}`);
    });


    app.post('/calculate/subtract', async (req, res) => {
      const appResponse = await axios.post(`${daprUrl}/subtractapp/method/subtract`, req.body);
      return res.send(`${appResponse.data}`);
    });
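
The same pattern can be exercised by hand through any pod's sidecar. A sketch, assuming the sidecar's default HTTP port 3500 has been port-forwarded to the local machine:

# invoke the add method on addapp via the Dapr sidecar
curl -s -X POST http://localhost:3500/v1.0/invoke/addapp/method/add \
  -H "Content-Type: application/json" \
  -d '{"operandOne":"52","operandTwo":"34"}'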

Related

Dapr observability: distributed tracing
distributed-calculator

Install istio

Download Istio from the command line

curl -L https://istio.io/downloadIstio | sh -

The installation directory contains:

  • sample applications under samples/
  • the istioctl client binary under bin/

Add bin/ to PATH so the istioctl commands below are convenient to run.

export PATH=$PWD/bin:$PATH

Install istio:

istioctl install --set profile=demo -y

Uninstall

istioctl uninstall --purge

Inject the sidecar proxy

Label the namespace so that Istio automatically injects the Envoy sidecar proxy when applications are deployed:

kubectl label namespace default istio-injection=enabled

Practice

  • Deploy
    kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
    kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
  • Change the LB to NodePort
    kubectl patch service istio-ingressgateway -n istio-system -p '{"spec":{"type":"NodePort"}}'
    Check the port bound to 80; here it is 30984
    kubectl -n istio-system get service istio-ingressgateway
  • Local
    export INGRESS_HOST=127.0.0.1
    export INGRESS_PORT=30984
  • Final gateway address
    export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
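
To verify, a quick check against the Bookinfo product page (the grep just extracts the page title):

curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"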

Background

We previously walked through
Ory Hydra之OAuth 2.0 Authorize Code Flow
Ory Hydra之Oauth 2.0 Client Credentials flow
but those were not on 2.0. This time we run the complete flow on 2.0 and fully explain how to hook your own user system into the authorization flow.

Deployment

./docker-compose.yml

version: "3.7"
services:
hydra:
image: oryd/hydra:v2.0.2
ports:
- "4444:4444" # Public port
- "4445:4445" # Admin port
- "5555:5555" # Port for hydra token user
command: serve -c /etc/config/hydra/hydra.yml all --dev
volumes:
- type: bind
source: ./config
target: /etc/config/hydra
environment:
- DSN=postgres://hydra:secret@postgresd:5432/hydra?sslmode=disable&max_conns=20&max_idle_conns=4
restart: unless-stopped
depends_on:
- hydra-migrate
networks:
- intranet
hydra-migrate:
image: oryd/hydra:v2.0.2
environment:
- DSN=postgres://hydra:secret@postgresd:5432/hydra?sslmode=disable&max_conns=20&max_idle_conns=4
command: migrate -c /etc/config/hydra/hydra.yml sql -e --yes
volumes:
- type: bind
source: ./config
target: /etc/config/hydra
restart: on-failure
networks:
- intranet
consent:
environment:
- HYDRA_ADMIN_URL=http://hydra:4445
image: oryd/hydra-login-consent-node:v2.0.2
ports:
- "3000:3000"
restart: unless-stopped
networks:
- intranet
postgresd:
image: postgres:11.8
ports:
- "5432:5432"
environment:
- POSTGRES_USER=hydra
- POSTGRES_PASSWORD=secret
- POSTGRES_DB=hydra
networks:
- intranet
networks:
intranet:

./config/hydra.yml

serve:
  cookies:
    same_site_mode: Lax

urls:
  self:
    issuer: http://127.0.0.1:4444
  consent: http://127.0.0.1:3000/consent
  login: http://127.0.0.1:3000/login
  logout: http://127.0.0.1:3000/logout

secrets:
  system:
    - youReallyNeedToChangeThis

oidc:
  subject_identifiers:
    supported_types:
      - pairwise
      - public
    pairwise:
      salt: youReallyNeedToChangeThis
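
With both files in place, bringing the stack up is a standard docker-compose run:

# hydra on 4444/4445, the sample consent app on 3000, postgres on 5432
docker-compose up -d
docker-compose ps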

Demo

Authorization Code Grant && Client Credentials Grant

Create a client

From 2.0 on, client_id is no longer required: a UUID is generated automatically, and if client_secret is left empty one is generated as well.

  • POST to http://localhost:4445/admin/clients with the body below
    {
      "client_name": "crm",
      "token_endpoint_auth_method": "client_secret_basic",
      "redirect_uris": [
        "http://127.0.0.1:5555/callback"
      ],
      "scope": "openid offline",
      "grant_types": [
        "authorization_code",
        "refresh_token",
        "implicit",
        "client_credentials"
      ],
      "response_types": [
        "code",
        "id_token",
        "token"
      ]
    }
  • Response
    {
      "client_id": "a9ea2e4c-5c9e-4edd-8a53-09124b870477",
      "client_name": "crm",
      "client_secret": "A2pnWQdJPYokBG9SvN3zKbnlKL",
      "redirect_uris": [
        "http://127.0.0.1:5555/callback"
      ],
      "grant_types": [
        "authorization_code",
        "refresh_token",
        "implicit",
        "client_credentials"
      ],
      "response_types": [
        "code",
        "id_token",
        "token"
      ],
      "scope": "openid offline",
      "audience": [],
      "owner": "",
      "policy_uri": "",
      "allowed_cors_origins": [],
      "tos_uri": "",
      "client_uri": "",
      "logo_uri": "",
      "contacts": null,
      "client_secret_expires_at": 0,
      "subject_type": "public",
      "jwks": {},
      "token_endpoint_auth_method": "client_secret_basic",
      "userinfo_signed_response_alg": "none",
      "created_at": "2022-11-07T07:14:24Z",
      "updated_at": "2022-11-07T07:14:23.930344Z",
      "metadata": {},
      "registration_access_token": "ory_at_E71s0oXkgZJfLeVn4r7dYsvyanvauuPn6AiQ0uGoh2M.a4MwTXBT6z7rRGVdLK_Cmi-rNF_EH09MymOwpBB6QaE",
      "registration_client_uri": "http://127.0.0.1:4444/oauth2/register/a9ea2e4c-5c9e-4edd-8a53-09124b870477",
      "authorization_code_grant_access_token_lifespan": null,
      "authorization_code_grant_id_token_lifespan": null,
      "authorization_code_grant_refresh_token_lifespan": null,
      "client_credentials_grant_access_token_lifespan": null,
      "implicit_grant_access_token_lifespan": null,
      "implicit_grant_id_token_lifespan": null,
      "jwt_bearer_grant_access_token_lifespan": null,
      "refresh_token_grant_id_token_lifespan": null,
      "refresh_token_grant_access_token_lifespan": null,
      "refresh_token_grant_refresh_token_lifespan": null
    }

Requests and responses

The typical flow is these three steps:

  • 1. Authorization Request: opened in the browser

    GET {authorization endpoint}
      ?response_type=code // required
      &client_id={client ID} // required
      &redirect_uri={redirect URI} // optional
      &scope={requested scopes} // optional
      &state={any value} // recommended
      HTTP/1.1
    HOST: {authorization server}
  • 2. Authorization Response: obtain the code

    HTTP/1.1 302 Found
    Location: {redirect URI}
      ?code={authorization code} // required
      &state={any text} // required if the authorization request carried state
  • 3. Access Token Request: exchange the code for tokens

    POST {token endpoint} HTTP/1.1
    Host: {authorization server}
    Content-Type: application/x-www-form-urlencoded

    grant_type=authorization_code // required
    &code={authorization code} // required; must be the code issued by the authorization server
    &redirect_uri={redirect URI} // required if the authorization request carried redirect_uri
    &code_verifier={verifier} // required if the authorization request carried code_challenge

Let's walk these three steps through Hydra.

  • 1. Hydra Authorization Request: opened in the browser
    GET http://127.0.0.1:4444/oauth2/auth
      ?response_type=code
      &client_id=a9ea2e4c-5c9e-4edd-8a53-09124b870477
      &scope=openid offline
      &state=nqvresaazswwbofkeztgnvfs
    http://127.0.0.1:4444/oauth2/auth?response_type=code&client_id=a9ea2e4c-5c9e-4edd-8a53-09124b870477&scope=openid offline&state=nqvresaazswwbofkeztgnvfs

After opening it, we find ourselves redirected to http://127.0.0.1:3000/login?login_challenge=9ba37003126244608ab2d4501f9b32f5

To get from step 1 to the code in step 2, Hydra abstracts two flows, Login and Consent, and these are where we wire in our own system's user authentication and authorization. The Login flow handles sign-in; the Consent flow handles authorization.

Look at the configuration in ./config/hydra.yml:

consent: http://127.0.0.1:3000/consent   // consent (frontend)
login: http://127.0.0.1:3000/login // login (frontend)
logout: http://127.0.0.1:3000/logout // logout

Login flow

The redirect target is exactly the login URL from the config. The Login flow is a login/authentication service (frontend plus backend) that we implement in our own business system; the link also carries a login_challenge.

  • Frontend
    Send the username, password, and login_challenge to our backend.

  • Backend
    With the username, password, and login_challenge in hand, the backend does two things:

1. validates the username and password against its own user store
2. calls acceptLoginRequest with the user info and login_challenge

Login accept request

Request URL:
http://127.0.0.1:4445/admin/oauth2/auth/requests/login/accept?login_challenge=66cc8259bf0c4a3880e26c189968bbd6
Method: PUT
Content type: application/json
Body:

{
  "subject": "foo@bar.com",
  "acr": "1",
  "context": {},
  "force_subject_identifier": "2",
  "remember": false,
  "remember_for": -4068005
}

Successful response:

{
  "redirect_to": "http://127.0.0.1:4444/oauth2/auth?client_id=624b45d4-ef0f-4bec-a6be-9c18e7103c3e&login_verifier=b458cfc4152a4d9389fc52413087c020&response_type=code&scope=openid+offline&state=nqvresaazswwbofkeztgnvfs"
}

Opening this redirect takes us into the Consent flow.

Consent flow

  • Frontend
    Send the user's grants and the consent_challenge to our backend.

  • Backend
    With the grants and consent_challenge in hand, the backend does two things:
    1. applies the authorization within its own business system
    2. calls acceptConsentRequest with the consent_challenge

Consent accept request

Request URL: http://127.0.0.1:4445/admin/oauth2/auth/requests/consent/accept?consent_challenge=xxxxxx
Method: PUT
Content type: application/json
Body:

Note: session is where you put whatever you want to end up inside the id_token, but no Chinese! Something like "name":"小白" cannot be handled by Hydra.

{
  "grant_access_token_audience": [],
  "grant_scope": [
    "openid",
    "offline"
  ],
  "handled_at": "2019-04-16T04:45:05.685Z",
  "remember": false,
  "remember_for": -72766940,
  "session": {
    "access_token": {},
    "id_token": {
      "userId": "111"
    }
  }
}

Successful response:

{
  "redirect_to": "http://127.0.0.1:4444/oauth2/auth?client_id=624b45d4-ef0f-4bec-a6be-9c18e7103c3e&consent_verifier=2f77bd26c6504ccb8a5e88d65a5818b7&response_type=code&scope=openid+offline&state=nqvresaazswwbofkeztgnvfs"
}

Opening that redirect completes the Consent flow and yields the code, which is delivered to the redirect_uris we registered when creating the client, http://127.0.0.1:5555/callback:

http://127.0.0.1:5555/callback?code=ory_ac_0T0UehFyo-BVCDcdiu2qUuxLw4jNLpwFDjqkC157-ms.eUZpm0ZokBUBdxgEI5y5w8BTjf1URAzMwwXddW3gf4Q&scope=openid+offline&state=nqvresaazswwbofkeztgnvfs

Get tokens

Obtain and refresh tokens

Request URL:
http://127.0.0.1:4444/oauth2/token
Method: POST
Content type: application/x-www-form-urlencoded

Parameter      Type    Description
grant_type     string  grant type, required
code           string  authorization code
refresh_token  string  refresh token
client_id      string  client id, required
client_secret  string  client secret, required
redirect_uri   string  redirect uri

Authorization uses Basic Auth carrying the client_id and client_secret.

{
  "grant_type": "authorization_code",
  "client_id": "facebook-photo-backup",
  "redirect_uri": "http://localhost:9020/login",
  "code": "Qk4jf3dZ_DSkAAtlbS9pTilVFTRCeAYHdPpUN"
}
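
As a concrete sketch with curl, using the client credentials created earlier ({authorization_code} stands for the code returned to the callback):

curl -s -X POST http://127.0.0.1:4444/oauth2/token \
  -u "a9ea2e4c-5c9e-4edd-8a53-09124b870477:A2pnWQdJPYokBG9SvN3zKbnlKL" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=authorization_code" \
  -d "code={authorization_code}" \
  -d "redirect_uri=http://127.0.0.1:5555/callback"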


Response

{
"access_token": "ory_at_D0wqbtwGY_rtFdy_wEfqhyQGmD7V358y8XWw_94AvGM.cEbzrl27tVxIQO6fJMDBJgCO72OAenBuZgXv1VPUrDc",
"expires_in": 3600,
"id_token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IjhmYjRlMjlhLTZlZmItNGIxMy04ODM2LTM5M2ZjM2I1NWUyOSIsInR5cCI6IkpXVCJ9.eyJhY3IiOiJsYWJvIiwiYXRfaGFzaCI6InRiMjcza0kwWWhkYk52by0zQ0FKYmciLCJhdWQiOlsiYTllYTJlNGMtNWM5ZS00ZWRkLThhNTMtMDkxMjRiODcwNDc3Il0sImF1dGhfdGltZSI6MTY2NzgwODIxNSwiZXhwIjoxNjY3ODExODYyLCJpYXQiOjE2Njc4MDgyNjIsImlzcyI6Imh0dHA6Ly8xMjcuMC4wLjE6NDQ0NCIsImp0aSI6ImNlN2UwNGI4LTI1ODUtNGQzYi1hMjBhLTA1OThmMDdjZDUyOCIsInJhdCI6MTY2NzgwODIwMiwic2lkIjoiMDE4ZDc3MWEtMjQyNi00YzhhLWFmZjQtYjQ3MGJlYWM5NGI0Iiwic3ViIjoiZm9vQGJhci5jb20iLCJ1c2VySWQiOiIxMTEifQ.FuUDY0w94H9SPFr8iakHvEo63w9RTVqjHgjzi7gngHgL6sRV3yP9-hZrc4HBZys_PFT5KP_bQra3IKqM-OhF9UZZnfXM4je6HSAW8XdX0PbMZQGut1_5jh8rZjqXPJNY_YL2CNnm4YhID7CO-sEIqcBrVu1O30l44cC93NJJbU9N8wrlHf4H2ROoUhkpPl8WSoRDviUX0NB6dg3Y87q8MDLUTjvQpLNK7SejSI9c6AzNyQneGYBVAVksxItluulWcLgjM98gmZ_35jge5KeOel8q0kpdjbKIOfDCva8PibXoSWZtIvCi4EHYE2aSvu5TL1NlaDhkzE-tuuxjmQJJdIeLOy-kcDFd63t-l3k9dy859UM7B6BNKKFcHmc5bkg2BRf7iZxc7Q6BEvi2F7mrsThJYFtpTNjQCCOsO-E3d2WXi7uFwSI_qQpE5eAcBa0-qivv8RHUqiFIhNDNp1WYk2yDCgqeQx3NokZ03N4oM_CWCnyt2M0WKnofPL0YpnZXiIzxM_KvnqTfZy9ckoVj7gf1H9yZkhQunQVx2oIFIcEqshA1cbvtJ-XN5mZgLAnwSYN3_vRUsW3GQIP4GCT8zf8CVIW-7H5JkWSSLs2DrnexwYEuMX-6TttytflF1FNru4TfF539z9HkBp35-aa8xvh1j-GFSXwlaUr2KKjeTDU",
"refresh_token": "ory_rt_PNaBzXflt2ICDwbh7j68eerGO-8HEtC8S6WzUlTNknQ.6ih8eHalnW5zNwHM2RQktv1WDjSOw3S7mwzUutMMLk8",
"scope": "openid offline",
"token_type": "bearer"
}

Finally, run the idToken through a JWT decoder to see the effect: the information you need comes back out of the decoded idToken.
At this point we hold the access_token. The frontend can cache the token in a cookie or session (the backend caches it correspondingly), attach it when calling other services later, and the backend validates the token to decide whether to let the request through.


Related

5min-tutorial
jq

CLI operations

Create a client via the CLI

code_client=$(docker-compose -f docker-compose.yml exec hydra \
hydra create client \
--endpoint http://127.0.0.1:4445 \
--grant-type authorization_code,refresh_token \
--response-type code,id_token \
--format json \
--scope openid --scope offline \
--redirect-uri http://127.0.0.1:5555/callback)

code_client_id=$(echo $code_client | jq -r '.client_id')
code_client_secret=$(echo $code_client | jq -r '.client_secret')

Authorize with the hydra sample flow (Hydra ships a quick way to exercise the OAuth authorization flow)

docker-compose -f docker-compose.yml exec hydra \
hydra perform authorization-code \
--client-id $code_client_id \
--client-secret $code_client_secret \
--endpoint http://127.0.0.1:4444/ \
--port 5555 \
--scope openid --scope offline