  15. Jun 06, 2023
    • ubifs: allow loading to above 4GiB · b46cec41
      Ben Dooks authored, Heiko Schocher committed
      
      The ubifsload command truncates any address above 4GiB because it casts
      the address to a u32 instead of using an unsigned long, as most of the
      other load commands do. Change it to an unsigned long to allow loading
      into high memory on boards that use these areas.
      
      Fixes the following error:
      
      => ubifsload 0x2100000000 /boot/Image.lzma
      Loading file '/boot/Image.lzma' to addr 0x00000000...
      Unhandled exception: Store/AMO access fault
      
      Signed-off-by: Ben Dooks <ben.dooks@sifive.com>
      Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
      b46cec41
  17. May 08, 2023
    • btrfs: fix offset when reading compressed extents · b1d3013d
      Dominique Martinet authored, Tom Rini committed
      
      btrfs_read_extent_reg computed the extent offset correctly in the
      BTRFS_COMPRESS_NONE case, but did not account for the 'offset - key.offset'
      part in the compressed case, causing the function to read incorrect
      data.
      
      In the case I examined, the last 4k of a file was corrupted and
      contained data from a few blocks prior, e.g. reading a 10k file with a
      single extent:
      btrfs_file_read()
       -> btrfs_read_extent_reg
          (aligned part loop, until 8k)
       -> read_and_truncate_page
         -> btrfs_read_extent_reg
            (re-reads the last extent from 8k to the end,
            incorrectly reading the first 2k of data)
      
      This can be reproduced as follows:
      $ truncate -s 200M btr
      $ mount btr -o compress /mnt
      $ pat() { dd if=/dev/zero bs=1M count=$1 iflag=count_bytes status=none | tr '\0' "\\$2"; }
      $ { pat 4K 1; pat 4K 2; pat 2K 3; }  > /mnt/file
      $ sync
      $ filefrag -v /mnt/file
      File size of /mnt/file is 10240 (3 blocks of 4096 bytes)
       ext:     logical_offset:        physical_offset: length:   expected: flags:
         0:        0..       2:       3328..      3330:      3:             last,encoded,eof
      $ umount /mnt
      
      Then in u-boot:
      => load scsi 0 2000000 file
      10240 bytes read in 3 ms (3.3 MiB/s)
      => md 2001ff0
      02001ff0: 02020202 02020202 02020202 02020202  ................
      02002000: 01010101 01010101 01010101 01010101  ................
      02002010: 01010101 01010101 01010101 01010101  ................
      
      (02002000 onwards should contain '03' pattern but went back to 01,
      start of the extent)
      
      After patch, data is read properly:
      => md 2001ff0
      02001ff0: 02020202 02020202 02020202 02020202  ................
      02002000: 03030303 03030303 03030303 03030303  ................
      02002010: 03030303 03030303 03030303 03030303  ................
      
      Note that the code previously (before commit e3427184 ("fs: btrfs:
      Implement btrfs_file_read()")) did not split that read in two, so
      this is a regression even if the previous code might not have been
      handling offsets correctly either (something that booted now fails to
      boot)
      
      Fixes: a26a6bed ("fs: btrfs: Introduce btrfs_read_extent_inline() and btrfs_read_extent_reg()")
      Signed-off-by: Dominique Martinet <dominique.martinet@atmark-techno.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      b1d3013d
  21. Feb 23, 2023
    • fs: btrfs: limit the mapped length to the original length · 511a1303
      Qu Wenruo authored, Tom Rini committed
      
      [BUG]
      There is a bug report that btrfs driver caused hang during file read:
      
        This breaks btrfs on the HiFive Unmatched.
      
        => pci enum
        PCIE-0: Link up (Gen1-x8, Bus0)
        => nvme scan
        => load nvme 0:2 0x8c000000 /boot/dtb/sifive/hifive-unmatched-a00.dtb
        [hangs]
      
      [CAUSE]
      The reporter provided some debug output:
      
        read_extent_data: cur=615817216, orig_len=16384, cur_len=16384
        read_extent_data: btrfs_map_block: cur_len=479944704; ret=0
        read_extent_data: ret=0
        read_extent_data: cur=615833600, orig_len=4096, cur_len=4096
        read_extent_data: btrfs_map_block: cur_len=479928320; ret=0
      
      Note the second and the last lines: @cur_len is over 450MiB, which is
      almost a full chunk size.
      
      Inside __btrfs_map_block(), we limit the returned length to the stripe
      length, but only depending on the chunk type:
      
      	if (map->type & (BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID1 |
      			 BTRFS_BLOCK_GROUP_RAID1C3 | BTRFS_BLOCK_GROUP_RAID1C4 |
      			 BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6 |
      			 BTRFS_BLOCK_GROUP_RAID10 |
      			 BTRFS_BLOCK_GROUP_DUP)) {
      		/* we limit the length of each bio to what fits in a stripe */
      		*length = min_t(u64, ce->size - offset,
      			      map->stripe_len - stripe_offset);
      	} else {
      		*length = ce->size - offset;
      	}
      
      This means that if the chunk uses the SINGLE profile, we do not limit
      the returned length at all, and even for the other profiles we can
      still return a length much larger than the requested one.
      
      [FIX]
      Properly clamp the returned length, preventing it from returning a much
      larger range than expected.
      
      Reported-by: Andreas Schwab <schwab@linux-m68k.org>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      511a1303
  25. Jan 19, 2023
    • lib: zstd: update to latest Linux zstd 1.5.2 · 4b9b25d9
      Brandon Maier authored, Tom Rini committed
      Update the zstd implementation to match Linux zstd 1.5.2 from commit
      2aa14b1ab2.
      
      This was motivated by running into decompression corruption when
      trying to decompress files compressed with newer versions of zstd.
      zstd users also report significantly improved decompression times with
      newer zstd versions, which is a side benefit.
      
      The original zstd code was copied from Linux commit 2aa14b1ab2, which
      is a custom-built implementation based on zstd 1.3.1. Linux switched
      to a copy of the upstream zstd code in Linux commit e0c1b49f5b, which
      results in a large code diff. However, this should make future updates
      easier, along with other benefits[1].
      
      This commit is a straight mirror of the Linux zstd code, except to:
      - update a few #include that do not translate cleanly
        - linux/swab.h -> asm/byteorder.h
        - linux/limits.h -> linux/kernel.h
        - linux/module.h -> linux/compat.h
      - remove assert() from debug.h so it doesn't conflict with u-boot's
        assert()
      - strip out the compressor code as was done in the previous u-boot zstd
      - update existing zstd users to the new Linux zstd API
      - change the #define for MEM_STATIC to use INLINE_KEYWORD for codesize
      - add a new KConfig option that sets zstd build options to minify code
        based on zstd's ZSTD_LIB_MINIFY[2].
      
      These changes were tested by booting a zstd 1.5.2 compressed kernel
      inside a FIT. The squashfs changes were tested by loading a file from
      a zstd-compressed squashfs with sqfsload. buildman was used to
      compile-test other boards and check for binary bloat, as follows:
      
      > $ buildman -b zstd2 --boards dh_imx6,m53menlo,mvebu_espressobin-88f3720,sandbox,sandbox64,stm32mp15_dhcom_basic,stm32mp15_dhcor_basic,turris_mox,turris_omnia -sS
      > Summary of 6 commits for 9 boards (8 threads, 1 job per thread)
      > 01: Merge branch '2023-01-10-platform-updates'
      >        arm:  w+   m53menlo dh_imx6
      > 02: lib: zstd: update to latest Linux zstd 1.5.2
      >    aarch64: (for 2/2 boards) all -3186.0 rodata +920.0 text -4106.0
      >        arm: (for 5/5 boards) all +1254.4 rodata +940.0 text +314.4
      >    sandbox: (for 2/2 boards) all -4452.0 data -16.0 rodata +640.0 text -5076.0
      
      [1] https://github.com/torvalds/linux/commit/e0c1b49f5b674cca7b10549c53b3791d0bbc90a8
      [2] https://github.com/facebook/zstd/blob/f302ad8811643c428c4e3498e28f53a0578020d3/lib/libzstd.mk#L31
      
      
      
      Signed-off-by: default avatarBrandon Maier <brandon.maier@collins.com>
      [trini: Set ret to -EINVAL for the error of "failed to detect
      compressed" to fix warning, drop ZSTD_SRCSIZEHINT_MAX for non-Linux host
      tool builds]
      Signed-off-by: Tom Rini <trini@konsulko.com>
      4b9b25d9
  26. Jan 11, 2023
    • fs/btrfs: handle data extents which cross stripe boundaries correctly · 11d56701
      Qu Wenruo authored, Tom Rini committed
      
      [BUG]
      Since btrfs-progs v5.14, mkfs.btrfs supports creating single-device
      RAID0 filesystems. If we create such a filesystem and write a file
      crossing a stripe boundary:
      
        # mkfs.btrfs -m dup -d raid0 test.img
        # mount test.img mnt
        # xfs_io -f -c "pwrite 0 128K" mnt/file
        # umount mnt
      
      Since btrfs uses a 64K stripe length, the above 128K data write is
      guaranteed to cross at least one stripe boundary.
      
      u-boot then fails to read the above 128K file:
      
       => host bind 0 /home/adam/test.img
       => ls host 0
       <   >     131072  Fri Dec 30 00:18:25 2022  file
       => load host 0 0 file
       BTRFS: An error occurred while reading file file
       Failed to load 'file'
      
      [CAUSE]
      Unlike tree block reads, data extent reads do not consider the case in
      which one data extent crosses a stripe boundary.
      
      In read_data_extent(), we just call btrfs_map_block() once and read the
      first mapped range.
      
      And if the first mapped range is smaller than the desired range, it
      would return error.
      
      But since even single-device btrfs can use RAID0 profiles, the first
      mapped range can be at most 64K for RAID0, causing a false error.
      
      [FIX]
      Just like read_whole_eb(), we should call btrfs_map_block() in a loop
      until we read all data.
      
      Since we're here, also add extra error messages for the following cases:
      
      - btrfs_map_block() failure
        We already have the error message for it.
      
      - Missing device
        This should not happen, as we only support single device for now.
      
      - __btrfs_devread() failure
      
      With this bug fixed, the u-boot btrfs driver can properly read the
      above 128K file and gets the correct content:
      
       => host bind 0 /home/adam/test.img
       => ls host 0
       <   >     131072  Fri Dec 30 00:18:25 2022  file
       => load host 0 0 file
       131072 bytes read in 0 ms
       => md5sum 0 0x20000
       md5 for 00000000 ... 0001ffff ==> d48858312a922db7eb86377f638dbc9f
       ^^^ Above md5sum also matches.
      
      Reported-by: Sam Winchenbach <swichenbach@tethers.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      11d56701
    • fs/squashfs: Only use export table if available · bf48dde8
      David Oberhollenzer authored, Tom Rini committed
      
      In a squashfs filesystem, the fragment table is followed by these
      tables: the NFS export table, the ID table, and the xattr table.
      
      The export and xattr tables are both completely optional, but the ID
      table is mandatory; the Linux implementation refuses to mount the
      image if the ID table is missing. Tables that are not present have
      their location in the super block set to 0xFFFFFFFFFFFFFFFF.
      
      The u-boot implementation previously assumed that it could always rely
      on the export table location as an upper bound for the fragment table,
      trying (and failing) to read past the filesystem bounds when the
      export table is not present.
      
      This patch changes the driver to use the ID table location instead,
      falling back to the export table location only if it lies between the
      fragment table and the ID table.
      
      Signed-off-by: David Oberhollenzer <goliath@infraroot.at>
      Reviewed-by: Miquel Raynal <miquel.raynal@bootlin.com>
      bf48dde8