Bug 1177661 (CVE-2020-8565) - VUL-0: CVE-2020-8565: kubernetes: Incomplete fix for CVE-2019-11250 allows for token leak in logs when logLevel >= 9
Status: RESOLVED FIXED
Alias: CVE-2020-8565
Product: SUSE Security Incidents
Classification: Novell Products
Component: Incidents
Version: unspecified
Hardware: Other Other
Priority: P3 - Medium    Severity: Normal
Target Milestone: ---
Assignee: package coldpool
QA Contact: Security Team bot
URL: https://smash.suse.de/issue/269114/
Whiteboard: CVSSv3.1:SUSE:CVE-2020-8565:4.7:(AV:L...
Keywords:
Depends on:
Blocks:
 
Reported: 2020-10-14 06:15 UTC by Marcus Meissner
Modified: 2022-11-15 12:31 UTC (History)
13 users

See Also:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments

Description Marcus Meissner 2020-10-14 06:15:54 UTC
via prenotify

Hello Kubernetes Community,

Multiple security issues have been discovered in Kubernetes that allow 
for the exposure of secret data in logs, when verbose logging options 
are enabled. These issues have all been rated Medium 
CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:N/A:N 
<https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:H/I:N/A:N>(4.7)

*   CVE-2020-8565: Incomplete fix for CVE-2019-11250 allows for token
    leak in logs when logLevel >= 9
    <https://docs.google.com/document/d/1BOIEbQXXa6d0yBhwkDFpcCdHzpjKqE3zwBXuEUS0NiE/edit?ts=5f75ea63#heading=h.e2xe73w6hvxg>

affects all kubernetes releases

*    CVE-2020-8565 - https://github.com/kubernetes/kubernetes/pull/95316

If sufficient verbose logging is enabled, the following secrets can be 
exposed in logs:
*   CVE-2020-8565 - Kubernetes authorization tokens (incl. bearer tokens
    and basic auth)


*   CVE-2020-8565 - Patrick Rhomberg (purelyapplied)
Comment 1 Marcus Meissner 2020-10-14 06:16:21 UTC
CRD: 2020-10-14
Comment 4 Marcus Meissner 2020-10-16 08:07:51 UTC
now public via kubernetes advisory
Comment 5 Danny Sauer 2020-10-16 22:06:05 UTC
k8s 1.17.13 is now scheduled for inclusion in CaaSP 4.2.4.  Also submitted to the Kubic kubernetes1.17 package.

k8s 1.18.10 is ready to be tested for CaaSP 4.5 (should be in 4.5.2).  Kubic kubernetes1.18 was already up-to-date as of earlier today.
Comment 8 Swamp Workflow Management 2020-12-11 17:16:51 UTC
SUSE-SU-2020:3761-1: An update that solves four vulnerabilities and has 11 fixes is now available.

Category: security (important)
Bug References: 1172270,1173055,1173165,1174219,1174951,1175352,1176225,1176578,1176903,1176904,1177361,1177362,1177660,1177661,1178785
CVE References: CVE-2020-15106,CVE-2020-8029,CVE-2020-8564,CVE-2020-8565
JIRA References: 
Sources used:
SUSE CaaS Platform 4.5 (src):    caasp-release-4.5.2-1.8.2, cri-o-1.18-1.18.4-4.3.2, etcd-3.4.13-3.3.1, helm2-2.16.12-3.3.1, helm3-3.3.3-3.8.1, kubernetes-1.18-1.18.10-4.3.1, patterns-caasp-Management-4.5-3.3.1, skuba-2.1.11-3.10.1, velero-1.4.2-3.3.1

NOTE: This line indicates an update has been released for the listed product(s). At times this might be only a partial fix. If you have questions please reach out to maintenance coordination.
Comment 9 Swamp Workflow Management 2020-12-11 17:18:26 UTC
SUSE-SU-2020:3760-1: An update that fixes 8 vulnerabilities is now available.

Category: security (moderate)
Bug References: 1174219,1174951,1176752,1176753,1176754,1176755,1177661,1177662
CVE References: CVE-2020-15106,CVE-2020-15112,CVE-2020-15184,CVE-2020-15185,CVE-2020-15186,CVE-2020-15187,CVE-2020-8565,CVE-2020-8566
JIRA References: 
Sources used:
SUSE Linux Enterprise Module for Containers 15-SP1 (src):    kubernetes-1.17.13-4.21.2
SUSE CaaS Platform 4.0 (src):    caasp-release-4.2.4-24.36.1, cri-o-1.16.1-3.37.3, etcd-3.4.13-4.15.1, helm-2.16.12-3.10.1, kubernetes-1.17.13-4.21.2, skuba-1.4.11-3.49.2, terraform-provider-aws-2.59.0-1.6.1

Comment 14 Petr Gajdos 2022-09-26 12:28:47 UTC
CVE-2020-8565
	
In Kubernetes, if the logging level is set to at least 9, authorization and bearer tokens will be written to log files. This can occur both in API server logs and client tool output like kubectl. This affects <= v1.19.3, <= v1.18.10, <= v1.17.13, < v1.20.0-alpha2. 

https://github.com/kubernetes/kubernetes/issues/95623

https://github.com/kubernetes/kubernetes/pull/95316/commits/f0f52255412cbc6834bd225a59608ebb4a0d399b

This means 15sp2/kubernetes1.18 is affected as well.
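The linked fix commit changes how request headers are written to the debug log in client-go's round_trippers.go. A minimal Go sketch of the idea (the function name and `<masked>` placeholder are illustrative, not the exact upstream code): sensitive header values are masked before the log line is assembled, so the bearer token never reaches the log file even at logLevel >= 9.

```go
package main

import (
	"fmt"
	"strings"
)

// maskValue sketches the masking approach of the CVE-2020-8565 fix:
// the value of sensitive headers such as Authorization is replaced with
// a placeholder before logging; other headers are logged verbatim.
func maskValue(key, value string) string {
	if !strings.EqualFold(key, "Authorization") {
		return value
	}
	// Keep the auth scheme (e.g. "Bearer") for debuggability, mask the credential.
	if idx := strings.Index(value, " "); idx > 0 {
		return value[:idx] + " <masked>"
	}
	return "<masked>"
}

func main() {
	fmt.Println("Authorization:", maskValue("Authorization", "Bearer eyJhbGciOi..."))
	// → Authorization: Bearer <masked>
	fmt.Println("Accept:", maskValue("Accept", "application/json"))
	// → Accept: application/json
}
```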
Comment 16 Petr Gajdos 2022-09-26 14:35:45 UTC
15sp2/kubernetes1.18, 12/kubernetes submitted.

The maintainership issue for 12/kubernetes is still not resolved, though.
Comment 18 Petr Gajdos 2022-10-07 11:03:38 UTC
Reassigning back to coldpool, the fix causes a regression described in
bug 1204003.
Comment 20 Petr Gajdos 2022-10-10 13:01:07 UTC
I would hold the 12/kubernetes update, too.

It is interesting that the one-line fix from comment 0 is the only change between 1.18.10 and 1.18.20 in the round_trippers.go source. To be honest, I am not sure about the direction currently; perhaps we will have to try to reproduce the segfault manually in order to find the root cause of the regression.

I had got following hits from Martin Loviska:
https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/lib/publiccloud/utils.pm#L224
https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/tests/containers/helm.pm
Comment 21 Petr Gajdos 2022-10-10 13:58:58 UTC
(In reply to Petr Gajdos from comment #20)
> I had got following hits from Martin Loviska:
> https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/lib/
> publiccloud/utils.pm#L224
> https://github.com/os-autoinst/os-autoinst-distri-opensuse/blob/master/tests/
> containers/helm.pm

s/hits/hints/ ;)
Comment 22 Petr Gajdos 2022-10-12 13:05:12 UTC
Not much progress. When 5.8.1 is installed on the openQA machine, it crashes. When it is downgraded to 5.5.1, it does not segfault. If it is then upgraded again to 5.8.1, it does not crash (neither on `kubectl version` nor on `create namespace` commands).

The difference is that expiry and access-token entries are added to /root/.kube/config during a successful run of the 5.5.1 kubectl. If they are removed again (or even just access-token), `kubectl version` (or `create namespace`) will start segfaulting again with 5.8.1.

Segfault happens somewhere here:

epoll_ctl(4, EPOLL_CTL_DEL, 3, 0xc000807d54) = -1 EPERM (Operation not permitted)
fstat(3, {st_mode=S_IFREG|0600, st_size=2775, ...}) = 0
read(3, "apiVersion: v1\nclusters:\n- clust"..., 3287) = 2775
read(3, "", 512)                        = 0
close(3)                                = 0
[SEGFAULT]
newfstatat(AT_FDCWD, "/root/.kube", {st_mode=S_IFDIR|0755, st_size=64, ...}, 0) = 0
openat(AT_FDCWD, "/root/.kube/config", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0600) = 3
epoll_ctl(4, EPOLL_CTL_ADD, 3, {events=EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, data={u32=512237320, u64=140317093797640}}) = -1 EPERM (Operation not permitted)
epoll_ctl(4, EPOLL_CTL_DEL, 3, 0xc000807f3c) = -1 EPERM (Operation not permitted)

I.e., before opening .kube/config for writing.
Comment 23 Petr Gajdos 2022-10-14 11:38:42 UTC
In addition to comment 22, the situation could be even worse with 1.18.20, as it segfaults on 'kubectl version' with or without the access-token and expiry entries in .kube/config, as far as I tested.
Comment 24 Petr Gajdos 2022-11-04 12:27:00 UTC
For 1.18.10-150200.5.8.4 (regressed version):

# rm -r .kube
# gcloud container clusters get-credentials qe-c-testing --zone europe-west4-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for qe-c-testing.
# kubectl create namespace helm-ns-10
unexpected fault address 0x0
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x55a8c3d06f3f]

goroutine 1 [running]:
runtime.throw({0x55a8c4dafc6e?, 0x55a8c40df39e?})
	/usr/lib64/go/1.19/src/runtime/panic.go:1047 +0x5f fp=0xc00043cfa0 sp=0xc00043cf70 pc=0x55a8c3cd797f
runtime.sigpanic()
[...]
# kubectl create namespace helm-ns-10
namespace/helm-ns-10 created
#

The difference between the first and second kubectl call is in the .kube dir: cache, config.lock and http-cache are present there after the first call.

# rm -r .kube/{cache,config.lock,http-cache}
# kubectl create namespace helm-ns-11
[crash]
# kubectl create namespace helm-ns-11
namespace/helm-ns-11 created
#
Comment 25 Thomas Leroy 2022-11-15 10:29:17 UTC
(In reply to Petr Gajdos from comment #24)
> [...]

We decided to declare SUSE:SLE-12:Update/kubernetes as wontfix. Using kubectl with a verbosity lower than 9 (e.g. `kubectl -v=8 ...`) mitigates the issue. Anyway, thank you for the efforts, Petr :)
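The mitigation works because the token-bearing debug line is only emitted when verbosity reaches level 9. A minimal sketch of that level gating (hypothetical names; real kubectl uses klog's `-v` flag and `V()` levels, not this code):

```go
package main

import "fmt"

var verbosity = 9 // in real kubectl this comes from the -v flag

// V mimics level-gated logging: true when the requested level is enabled.
func V(level int) bool { return verbosity >= level }

// logRequest illustrates why the leak needs -v >= 9: the line that
// contained the raw bearer token (before the fix) is behind V(9).
func logRequest(token string) {
	if V(9) {
		fmt.Printf("curl -k -v -XGET -H \"Authorization: Bearer %s\" ...\n", token)
	}
}

func main() {
	logRequest("eyJhbGciOi...") // verbosity 9: token would be logged
	verbosity = 8
	logRequest("eyJhbGciOi...") // verbosity 8: prints nothing
}
```

Running with `-v=8` therefore keeps the vulnerable log line disabled entirely.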
Comment 26 Petr Gajdos 2022-11-15 11:52:20 UTC
(In reply to Thomas Leroy from comment #25)
> [...]
> We decided to declare SUSE:SLE-12:Update/kubernetes as wontfix. Using
> kubectl with a verbosity lower than 9 (eg. `kubectl -v=8 ...`) mitigates the
> issue. Anyway, thank you for the efforts Petr :)

Thank you. However, I am talking here about 15sp2/kubernetes1.18. 12/kubernetes has version 1.3.
Comment 27 Thomas Leroy 2022-11-15 12:21:42 UTC
(In reply to Petr Gajdos from comment #26)
> [...]
> Thank you. However, I am talking here about 15sp2/kubernetes1.18.
> 12/kubernetes has version 1.3.

Indeed, sorry
Comment 28 Thomas Leroy 2022-11-15 12:31:39 UTC
Everything done, closing