Bugzilla – Full Text Bug Listing
| Summary: | VUL-0: CVE-2020-8565: kubernetes: Incomplete fix for CVE-2019-11250 allows for token leak in logs when logLevel >= 9 | | |
|---|---|---|---|
| Product: | [Novell Products] SUSE Security Incidents | Reporter: | Marcus Meissner <meissner> |
| Component: | Incidents | Assignee: | package coldpool <coldpool> |
| Status: | RESOLVED FIXED | QA Contact: | Security Team bot <security-team> |
| Severity: | Normal | | |
| Priority: | P3 - Medium | CC: | cathy.hu, coldpool, containers-bugowner, dcermak, dsauer, gianluca.gabrielli, jmassaguerpla, kkaempf, meissner, mloviska, pgajdos, thomas.leroy, veronika.svecova |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | Other | | |
| OS: | Other | | |
| URL: | https://smash.suse.de/issue/269114/ | | |
| Whiteboard: | CVSSv3.1:SUSE:CVE-2020-8565:4.7:(AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:N/A:N) | | |
| Found By: | --- | Services Priority: | |
| Business Priority: | | Blocker: | --- |
| Marketing QA Status: | --- | IT Deployment: | --- |
Description
Marcus Meissner
2020-10-14 06:15:54 UTC
CRD: 2020-10-14, now public via Kubernetes advisory.

k8s 1.17.13 is now scheduled for inclusion in CaaSP 4.2.4; it was also submitted to the Kubic kubernetes1.17 package. k8s 1.18.10 is ready to be tested for CaaSP 4.5 (should be in 4.5.2). Kubic kubernetes1.18 was already up to date as of earlier today.

SUSE-SU-2020:3761-1: An update that solves four vulnerabilities and has 11 fixes is now available.
Category: security (important)
Bug References: 1172270,1173055,1173165,1174219,1174951,1175352,1176225,1176578,1176903,1176904,1177361,1177362,1177660,1177661,1178785
CVE References: CVE-2020-15106,CVE-2020-8029,CVE-2020-8564,CVE-2020-8565
JIRA References:
Sources used: SUSE CaaS Platform 4.5 (src): caasp-release-4.5.2-1.8.2, cri-o-1.18-1.18.4-4.3.2, etcd-3.4.13-3.3.1, helm2-2.16.12-3.3.1, helm3-3.3.3-3.8.1, kubernetes-1.18-1.18.10-4.3.1, patterns-caasp-Management-4.5-3.3.1, skuba-2.1.11-3.10.1, velero-1.4.2-3.3.1
NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.

SUSE-SU-2020:3760-1: An update that fixes 8 vulnerabilities is now available.
Category: security (moderate)
Bug References: 1174219,1174951,1176752,1176753,1176754,1176755,1177661,1177662
CVE References: CVE-2020-15106,CVE-2020-15112,CVE-2020-15184,CVE-2020-15185,CVE-2020-15186,CVE-2020-15187,CVE-2020-8565,CVE-2020-8566
JIRA References:
Sources used: SUSE Linux Enterprise Module for Containers 15-SP1 (src): kubernetes-1.17.13-4.21.2
SUSE CaaS Platform 4.0 (src): caasp-release-4.2.4-24.36.1, cri-o-1.16.1-3.37.3, etcd-3.4.13-4.15.1, helm-2.16.12-3.10.1, kubernetes-1.17.13-4.21.2, skuba-1.4.11-3.49.2, terraform-provider-aws-2.59.0-1.6.1
NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.
CVE-2020-8565: In Kubernetes, if the logging level is set to at least 9, authorization and bearer tokens will be written to log files. This can occur both in API server logs and client tool output like kubectl. This affects <= v1.19.3, <= v1.18.10, <= v1.17.13, < v1.20.0-alpha2.
https://github.com/kubernetes/kubernetes/issues/95623
https://github.com/kubernetes/kubernetes/pull/95316/commits/f0f52255412cbc6834bd225a59608ebb4a0d399b

This also means 15sp2/kubernetes1.18 is affected as well.

15sp2/kubernetes1.18 and 12/kubernetes submitted. The maintainership of the 12/kubernetes issue is still not resolved, though.

Reassigning back to coldpool: the fix causes a regression described in bug 1204003. I would hold the 12/kubernetes update, too.

It is interesting that the one-line fix from comment 0 is the only change between 1.18.10 and 1.18.20 in the round_trippers.go source. To be honest, I am not sure about the direction currently; perhaps we will have to try to reproduce the segfault manually in order to find the root cause of the regression.

I had got the following hits from Martin Loviska:
https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/lib/publiccloud/utils.pm#L224
https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/tests/containers/helm.pm

(In reply to Petr Gajdos from comment #20)
> I had got following hits from Martin Loviska: [...]
s/hits/hints/ ;)

Not much progress. When 5.8.1 is installed on the openQA machine, it crashes. When it is downgraded to 5.5.1, it does not segfault. If it is then upgraded again to 5.8.1, it does not crash (neither version, on create namespace commands).
The difference is that expiry and access-token entries are added to /root/.kube/config during a successful run of 5.5.1 kubectl. If they are removed again (or just access-token), kubectl version (or create namespace) will start to segfault again with 5.8.1.
Segfault happens somewhere here:
epoll_ctl(4, EPOLL_CTL_DEL, 3, 0xc000807d54) = -1 EPERM (Operation not permitted)
fstat(3, {st_mode=S_IFREG|0600, st_size=2775, ...}) = 0
read(3, "apiVersion: v1\nclusters:\n- clust"..., 3287) = 2775
read(3, "", 512) = 0
close(3) = 0
[SEGFAULT]
newfstatat(AT_FDCWD, "/root/.kube", {st_mode=S_IFDIR|0755, st_size=64, ...}, 0) = 0
openat(AT_FDCWD, "/root/.kube/config", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0600) = 3
epoll_ctl(4, EPOLL_CTL_ADD, 3, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=512237320, u64=140317093797640}}) = -1 EPERM (Operation not permitted)
epoll_ctl(4, EPOLL_CTL_DEL, 3, 0xc000807f3c) = -1 EPERM (Operation not permitted)
I. e. before opening .kube/config for writing.
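For context on what the one-line upstream change in round_trippers.go does, here is a minimal, self-contained sketch of that style of Authorization-header masking before headers are logged at high verbosity (-v>=9). Names and the exact masking format are illustrative, not the upstream identifiers:

```go
// Sketch of Authorization-header redaction in the spirit of the
// upstream round_trippers.go fix for CVE-2020-8565. Illustrative only.
package main

import (
	"fmt"
	"strings"
)

// knownAuthTypes lists auth schemes whose scheme name is safe to print;
// the credential itself is always masked.
var knownAuthTypes = map[string]bool{
	"bearer": true,
	"basic":  true,
}

// maskValue redacts the credential portion of an Authorization header
// so that verbose request logging no longer leaks tokens.
func maskValue(key, value string) string {
	if !strings.EqualFold(key, "Authorization") || value == "" {
		return value
	}
	scheme := value
	if i := strings.Index(value, " "); i > 0 {
		scheme = value[:i]
	}
	if !knownAuthTypes[strings.ToLower(scheme)] {
		// Unknown scheme: hide everything, including the scheme name.
		return "<masked>"
	}
	if len(value) > len(scheme) {
		return scheme + " <masked>"
	}
	return scheme
}

func main() {
	// A bearer token is reduced to its scheme plus a placeholder.
	fmt.Println(maskValue("Authorization", "Bearer eyJhbGciOi.secret"))
	// Non-auth headers pass through untouched.
	fmt.Println(maskValue("Accept", "application/json"))
}
```

The point of the upstream change is that the masking happens in the logging round tripper itself, so every code path that dumps request headers at high verbosity is covered at once.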
In addition to comment 22, the situation could be even worse with 1.18.20, as it segfaults on 'kubectl version' with or without the access-token and expiry entries in .kube/config, as far as I tested. For 1.18.10-150200.5.8.4 (regressed version):
# rm -r .kube
# gcloud container clusters get-credentials qe-c-testing --zone europe-west4-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for qe-c-testing.
# kubectl create namespace helm-ns-10
unexpected fault address 0x0
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x55a8c3d06f3f]
goroutine 1 [running]:
runtime.throw({0x55a8c4dafc6e?, 0x55a8c40df39e?})
/usr/lib64/go/1.19/src/runtime/panic.go:1047 +0x5f fp=0xc00043cfa0 sp=0xc00043cf70 pc=0x55a8c3cd797f
runtime.sigpanic()
[...]
# kubectl create namespace helm-ns-10
namespace/helm-ns-10 created
#
The difference between the first and second kubectl calls is in the .kube dir: cache, config.lock and http-cache are present there after the first call.
# rm -r .kube/{cache,config.lock,http-cache}
# kubectl create namespace helm-ns-11
[crash]
# kubectl create namespace helm-ns-11
namespace/helm-ns-11 created
#
(In reply to Petr Gajdos from comment #24)
> For 1.18.10-150200.5.8.4 (regressed version): [...]

We decided to declare SUSE:SLE-12:Update/kubernetes as wontfix. Using kubectl with a verbosity lower than 9 (e.g. `kubectl -v=8 ...`) mitigates the issue. Anyway, thank you for the efforts Petr :)

(In reply to Thomas Leroy from comment #25)
> We decided to declare SUSE:SLE-12:Update/kubernetes as wontfix. [...]

Thank you. However, I am talking here about 15sp2/kubernetes1.18; 12/kubernetes has version 1.3.

(In reply to Petr Gajdos from comment #26)
> Thank you. However, I am talking here about 15sp2/kubernetes1.18. [...]

Indeed, sorry.

Everything done, closing.