Commit 4d1a5b7b authored by Philippe Gerum, committed by Jan Kiszka

cobalt/memory: fix __vmalloc() calls

Since kernel v5.8, __vmalloc() does not take protection bits as
PROT_KERNEL is now wired in. Therefore we cannot disable the cache for
the UMM segment via the allocation call directly anymore.

This said, we no longer support any CPU architecture exhibiting
cache-aliasing braindamage either (that was armv4/v5), so let's convert
to the new __vmalloc() call format without bothering about cache
aliasing.

Signed-off-by: Philippe Gerum <>
Signed-off-by: Jan Kiszka <>
parent aacfaf8b
@@ -28,6 +28,7 @@
 #include <cobalt/kernel/heap.h>
 #include <cobalt/kernel/vfile.h>
 #include <cobalt/kernel/ancillaries.h>
+#include <asm/xenomai/wrappers.h>
  * @ingroup cobalt_core
@@ -849,7 +850,7 @@ void *xnheap_vmalloc(size_t size)
 	 * software on a 32bit system had to be wrong in the first
 	 * place anyway.
 	 */
-	return __vmalloc(size, GFP_KERNEL, PAGE_KERNEL);
+	return vmalloc_kernel(size, 0);
@@ -166,4 +166,10 @@ devm_hwmon_device_register_with_groups(struct device *dev, const char *name,
 #define __kernel_old_timeval timeval
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(5,8,0)
+#define vmalloc_kernel(__size, __flags) __vmalloc(__size, GFP_KERNEL|__flags, PAGE_KERNEL)
+#else
+#define vmalloc_kernel(__size, __flags) __vmalloc(__size, GFP_KERNEL|__flags)
+#endif
@@ -320,10 +320,11 @@ int cobalt_umm_init(struct cobalt_umm *umm, u32 size,
+	/* We don't support CPUs with VIVT caches and the like. */
 	size = PAGE_ALIGN(size);
-	basemem = __vmalloc(size, GFP_KERNEL|__GFP_ZERO,
-			    xnarch_cache_aliasing() ?
-			    pgprot_noncached(PAGE_KERNEL) : PAGE_KERNEL);
+	basemem = vmalloc_kernel(size, __GFP_ZERO);
 	if (basemem == NULL)
 		return -ENOMEM;