From: Michael Ellerman
Date: Tue, 8 Aug 2017 07:06:32 +0000 (+1000)
Subject: powerpc/iommu: Avoid undefined right shift in iommu_range_alloc()
X-Git-Tag: v4.14-rc1~119^2~217
X-Git-Url: https://asedeno.scripts.mit.edu/gitweb/?a=commitdiff_plain;h=63b85621d9aa6bdc410f01b22f7821cea3d7bdc6;p=linux.git

powerpc/iommu: Avoid undefined right shift in iommu_range_alloc()

In iommu_range_alloc() we generate a mask by right shifting ~0;
however, if the specified alignment is 0 then we right shift by 64,
which is undefined. UBSAN tells us so:

  UBSAN: Undefined behaviour in ../arch/powerpc/kernel/iommu.c:193:35
  shift exponent 64 is too large for 64-bit type 'long unsigned int'

We can avoid it by instead generating the mask with:

  align_mask = (1ull << align_order) - 1;

That will also generate an undefined shift if align_order is 64 or
greater, but that shouldn't be a problem for a while.

Signed-off-by: Michael Ellerman
---

diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 0e49a4560cff..e0af6cd7ba4f 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -190,7 +190,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
 	unsigned int pool_nr;
 	struct iommu_pool *pool;
 
-	align_mask = 0xffffffffffffffffl >> (64 - align_order);
+	align_mask = (1ull << align_order) - 1;
 
 	/* This allocator was derived from x86_64's bit string search */
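
For illustration only (not part of the patch), below is a minimal standalone
C sketch of the two mask computations. The helper names old_mask()/new_mask()
and the test values are invented for this example; the kernel computes the
mask inline in iommu_range_alloc(). It assumes a 64-bit unsigned long, as on
ppc64, and building with something like "gcc -fsanitize=undefined" should make
UBSAN flag the old form when align_order is 0:

  #include <stdio.h>

  /* Old form: the shift count becomes 64 when align_order == 0, which is
   * undefined behaviour for a 64-bit unsigned long. */
  static unsigned long old_mask(unsigned int align_order)
  {
          return 0xffffffffffffffffl >> (64 - align_order);
  }

  /* New form: well defined for align_order in the range 0..63. */
  static unsigned long new_mask(unsigned int align_order)
  {
          return (1ull << align_order) - 1;
  }

  int main(void)
  {
          /* align_order == 0 means no alignment, so the mask should be 0 */
          printf("new_mask(0) = %#lx\n", new_mask(0));   /* prints 0 */
          printf("new_mask(4) = %#lx\n", new_mask(4));   /* prints 0xf */
          printf("old_mask(0) = %#lx\n", old_mask(0));   /* undefined behaviour */
          return 0;
  }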