2025-11-28  net: dsa: tag_trailer: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "trailer" tagging protocol populates a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-14-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_rzn1_a5psw: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "a5psw" tagging protocol populates a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. Cc: "Clément Léger" <clement.leger@bootlin.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-13-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_rtl8_4: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "rtl8_4" and "rtl8_4t" tagging protocols populate a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. Cc: Linus Walleij <linus.walleij@linaro.org> Cc: "Alvin Šipraga" <alsi@bang-olufsen.dk> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-12-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_rtl4_a: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "rtl4a" tagging protocol populates a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. Cc: Linus Walleij <linus.walleij@linaro.org> Cc: "Alvin Šipraga" <alsi@bang-olufsen.dk> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-11-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_qca: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "qca" tagging protocol populates a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-10-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_ocelot: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "ocelot" and "seville" tagging protocols populate a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. This protocol used BIT_ULL() rather than simple BIT() to silence Smatch, as explained in commit 1f778d500df3 ("net: mscc: ocelot: avoid type promotion when calling ocelot_ifh_set_dest"). I would expect that this tool no longer complains now, when the BIT(dp->index) is hidden inside the dsa_xmit_port_mask() function, the return value of which is promoted to u64. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-9-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_mxl_gsw1xx: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "gsw1xx" tagging protocol populates a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. Cc: Hauke Mehrtens <hauke@hauke-m.de> Cc: Daniel Golle <daniel@makrotopia.org> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-8-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_mtk: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "mtk" tagging protocol populates a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. Cc: Chester A. Unal" <chester.a.unal@arinc9.com> Cc: Daniel Golle <daniel@makrotopia.org> Cc: DENG Qingfang <dqfext@gmail.com> Cc: Sean Wang <sean.wang@mediatek.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-7-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_ksz: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "ksz8795", "ksz9893", "ksz9477" and "lan937x" tagging protocols populate a bit mask for the TX ports. Unlike the others, "ksz9477" also accelerates HSR packet duplication. Make the HSR duplication logic available generically to all 4 taggers by using the dsa_xmit_port_mask() function to set the TX port mask. Cc: Woojung Huh <woojung.huh@microchip.com> Cc: UNGLinuxDriver@microchip.com Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-6-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_hellcreek: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "hellcreek" tagging protocol populates a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. Cc: Kurt Kanzenbach <kurt@linutronix.de> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-5-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_gswip: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "gswip" tagging protocol populates a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. Cc: Hauke Mehrtens <hauke@hauke-m.de> Cc: Daniel Golle <daniel@makrotopia.org> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-4-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: tag_brcm: use the dsa_xmit_port_mask() helper  (Vladimir Oltean)
The "brcm" and "brcm-prepend" tagging protocols populate a bit mask for the TX ports, so we can use dsa_xmit_port_mask() to centralize the decision of how to set that field. The port mask is written u8 by u8, first the high octet and then the low octet. Cc: Florian Fainelli <florian.fainelli@broadcom.com> Cc: Jonas Gorski <jonas.gorski@gmail.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-3-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: dsa: introduce the dsa_xmit_port_mask() tagging protocol helper  (Vladimir Oltean)
Many tagging protocols deal with the transmit port mask being a bit mask, and set it to BIT(dp->index). Not a big deal. Also, some tagging protocols are written for switches which support HSR offload (including packet duplication offload); there we see a walk using dsa_hsr_foreach_port() to find the other port in the same switch that's a member of the HSR, and set that bit in the port mask too. That isn't sufficiently interesting either, until you come to realize that there isn't anything special in the second case that switches in just the first case can't do too. It just becomes a matter of "is it wise to do it? are sufficient people using HSR/PRP with generic off-the-shelf switches to justify adding an extra test in the data path?" - the answer to which is probably "it depends". Not having HSR offload at all isn't _much_ worse, not so much as to make it impractical, esp. with a rich OS like Linux. But the HSR users are rather specialized in industrial networking. Anyway, the change acts on the premise that if we're going to have support for this, it should be uniformly implemented for everyone, and that if we find some sort of balance, we can keep everyone relatively happy. So I've disabled that logic if CONFIG_HSR isn't enabled, and I've tilted the branch predictor to say it's unlikely we're transmitting through a port with this capability currently active. On a branch miss, we're still going to save the transmission of one packet, so there's some remaining benefit there too. I don't _think_ we need to jump to static keys yet. The helper returns a 32-bit zero-based unsigned number that callers have to transpose using FIELD_PREP(). It is not the first time we assume DSA switches won't be larger than 32 ports - dsa_user_ports() has that assumption baked into it too. One last development note about why the "skb" argument is passed when it isn't used. Looking at the compiled code on arm64, which is identical both with and without it, the answer is "why not?" - who knows what other features dependent on the skb may be handled in the future. Link: https://lore.kernel.org/netdev/20251126093240.2853294-4-mmyangfl@gmail.com/ Cc: "Alvin Šipraga" <alsi@bang-olufsen.dk> Cc: "Chester A. Unal" <chester.a.unal@arinc9.com> Cc: "Clément Léger" <clement.leger@bootlin.com> Cc: Daniel Golle <daniel@makrotopia.org> Cc: David Yang <mmyangfl@gmail.com> Cc: DENG Qingfang <dqfext@gmail.com> Cc: Florian Fainelli <florian.fainelli@broadcom.com> Cc: George McCollister <george.mccollister@gmail.com> Cc: Hauke Mehrtens <hauke@hauke-m.de> Cc: Jonas Gorski <jonas.gorski@gmail.com> Cc: Kurt Kanzenbach <kurt@linutronix.de> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Sean Wang <sean.wang@mediatek.com> Cc: UNGLinuxDriver@microchip.com Cc: Woojung Huh <woojung.huh@microchip.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://patch.msgid.link/20251127120902.292555-2-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
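A sketch of roughly what such a helper could look like, reconstructed from the description above (the real net-next implementation may differ in argument order and exact guards); callers then transpose the result into their tag field with FIELD_PREP().

static inline u32 dsa_xmit_port_mask(struct sk_buff *skb,
				     struct net_device *dev)
{
	struct dsa_port *dp = dsa_user_to_port(dev);
	u32 port_mask = BIT(dp->index);

#if IS_ENABLED(CONFIG_HSR)
	/* Packet duplication offload: also transmit through the other port
	 * of this switch that is a member of the same HSR device.
	 */
	if (unlikely(dp->hsr_dev)) {
		struct dsa_port *other_dp;

		dsa_hsr_foreach_port(other_dp, dp->ds, dp->hsr_dev)
			port_mask |= BIT(other_dp->index);
	}
#endif

	return port_mask;
}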
2025-11-28  Merge branch 'net-broadcom-migrate-to-get_rx_ring_count-ethtool-callback'  (Jakub Kicinski)
Breno Leitao says: ==================== net: broadcom: migrate to .get_rx_ring_count() ethtool callback This series migrates Broadcom ethernet drivers to use the new .get_rx_ring_count() ethtool callback introduced in commit 84eaf4359c36 ("net: ethtool: add get_rx_ring_count callback to optimize RX ring queries"). This change simplifies the .get_rxnfc() implementation by extracting the ETHTOOL_GRXRINGS case handling into a dedicated callback, making the code cleaner and aligning these drivers with the updated ethtool API. The series covers two Broadcom drivers: bnxt and bcmgenet. Each patch removes the ETHTOOL_GRXRINGS case from the driver's .get_rxnfc() switch statement and implements the new .get_rx_ring_count() callback that returns the number of RX rings. ==================== Link: https://patch.msgid.link/20251127-grxrings_broadcom-v1-0-b0b182864950@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
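A hedged sketch of the per-driver conversion the series describes; this is not the literal bnxt or bcmgenet diff, the callback signature is assumed from the cover letter, and the private structure is hypothetical.

struct example_priv {
	unsigned int num_rx_rings;	/* hypothetical driver state */
};

static u32 example_get_rx_ring_count(struct net_device *dev)
{
	struct example_priv *priv = netdev_priv(dev);

	return priv->num_rx_rings;
}

static const struct ethtool_ops example_ethtool_ops = {
	/* .get_rxnfc() keeps its other cases but drops ETHTOOL_GRXRINGS */
	.get_rx_ring_count	= example_get_rx_ring_count,
};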
2025-11-28  net: bcmgenet: extract GRXRINGS from .get_rxnfc  (Breno Leitao)
Commit 84eaf4359c36 ("net: ethtool: add get_rx_ring_count callback to optimize RX ring queries") added specific support for GRXRINGS callback, simplifying .get_rxnfc. Remove the handling of GRXRINGS in .get_rxnfc() by moving it to the new .get_rx_ring_count(). This simplifies the RX ring count retrieval and aligns bcmgenet with the new ethtool API for querying RX ring parameters. Signed-off-by: Breno Leitao <leitao@debian.org> Link: https://patch.msgid.link/20251127-grxrings_broadcom-v1-2-b0b182864950@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: bnxt: extract GRXRINGS from .get_rxnfc  (Breno Leitao)
Commit 84eaf4359c36 ("net: ethtool: add get_rx_ring_count callback to optimize RX ring queries") added specific support for GRXRINGS callback, simplifying .get_rxnfc. Remove the handling of GRXRINGS in .get_rxnfc() by moving it to the new .get_rx_ring_count(). This simplifies the RX ring count retrieval and aligns bnxt with the new ethtool API for querying RX ring parameters. Signed-off-by: Breno Leitao <leitao@debian.org> Link: https://patch.msgid.link/20251127-grxrings_broadcom-v1-1-b0b182864950@debian.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  Merge branch 'tools-ynl-add-schema-checking'  (Jakub Kicinski)
Donald Hunter says: ==================== tools: ynl: add schema checking Add schema checking and yaml linting for the YNL specs. Patch 1 adds a schema_check make target using a pyynl --validate option Patch 2 adds a lint make target using yamllint Patches 3,4 fix issues reported by make -C tools/net/ynl lint schema_check ==================== Link: https://patch.msgid.link/20251127123502.89142-1-donald.hunter@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  ynl: fix schema check errors  (Donald Hunter)
Fix two schema check errors that have lurked since the attribute name validation was made more strict: not ok 2 conntrack.yaml schema validation 'labels mask' does not match '^[0-9a-z-]+$' not ok 13 nftables.yaml schema validation 'set id' does not match '^[0-9a-z-]+$' Signed-off-by: Donald Hunter <donald.hunter@gmail.com> Link: https://patch.msgid.link/20251127123502.89142-5-donald.hunter@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  ynl: fix a yamllint warning in ethtool spec  (Donald Hunter)
Fix warning reported by yamllint: ../../../Documentation/netlink/specs/ethtool.yaml 1272:21 warning truthy value should be one of [false, true] (truthy) Signed-off-by: Donald Hunter <donald.hunter@gmail.com> Link: https://patch.msgid.link/20251127123502.89142-4-donald.hunter@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  tools: ynl: add a lint makefile target  (Donald Hunter)
Add a lint target to run yamllint on the YNL specs. make -C tools/net/ynl lint make: Entering directory '/home/donaldh/net-next/tools/net/ynl' yamllint ../../../Documentation/netlink/specs/*.yaml ../../../Documentation/netlink/specs/ethtool.yaml 1272:21 warning truthy value should be one of [false, true] (truthy) make: Leaving directory '/home/donaldh/net-next/tools/net/ynl' Signed-off-by: Donald Hunter <donald.hunter@gmail.com> Link: https://patch.msgid.link/20251127123502.89142-3-donald.hunter@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  tools: ynl: add schema checking  (Donald Hunter)
Add a --validate flag to pyynl for explicit schema check with error reporting and add a schema_check make target to check all YNL specs. make -C tools/net/ynl schema_check make: Entering directory '/home/donaldh/net-next/tools/net/ynl' ok 1 binder.yaml schema validation not ok 2 conntrack.yaml schema validation 'labels mask' does not match '^[0-9a-z-]+$' Failed validating 'pattern' in schema['properties']['attribute-sets']['items']['properties']['attributes']['items']['properties']['name']: {'type': 'string', 'pattern': '^[0-9a-z-]+$'} On instance['attribute-sets'][14]['attributes'][22]['name']: 'labels mask' ok 3 devlink.yaml schema validation [...] Signed-off-by: Donald Hunter <donald.hunter@gmail.com> Link: https://patch.msgid.link/20251127123502.89142-2-donald.hunter@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: phy: aquantia: check for NVMEM deferral  (Robert Marko)
Currently, if the NVMEM provider is probed later than the Aquantia PHY, loading the firmware will fail with -EINVAL. To fix this, simply check for -EPROBE_DEFER when the NVMEM lookup is attempted and return it. Fixes: e93984ebc1c8 ("net: phy: aquantia: add firmware load support") Signed-off-by: Robert Marko <robimarko@gmail.com> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/20251127114514.460924-1-robimarko@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
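A minimal sketch of the pattern this fix describes, assuming the firmware is located through an NVMEM cell; the function and cell names are illustrative, not the exact aquantia driver code.

static int example_fw_from_nvmem(struct phy_device *phydev)
{
	struct nvmem_cell *cell;

	cell = of_nvmem_cell_get(phydev->mdio.dev.of_node, "firmware");
	if (IS_ERR(cell)) {
		/* Propagate deferral so the PHY is retried once the NVMEM
		 * provider has probed, instead of failing with a hard error.
		 */
		if (PTR_ERR(cell) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		return PTR_ERR(cell);
	}

	/* ...read and download the firmware here... */
	nvmem_cell_put(cell);
	return 0;
}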
2025-11-28  ext4: mark inodes without acls in __ext4_iget()  (Jan Kara)
Mark inodes without acls with cache_no_acl() in __ext4_iget() so that path lookup can run in RCU mode from the start. This is interesting in particular for the case where the file owner does the lookup, because in that case we end up constantly hitting the slow path otherwise. We drop out from the fast path (because ACL state is unknown) but never end up calling check_acl() to cache ACL state. The problem was originally analyzed by Linus and the fix tested by Mateusz, I'm just putting it into mergeable form :). Link: https://lore.kernel.org/all/CAHk-=whSzc75TLLPWskV0xuaHR4tpWBr=LduqhcCFr4kCmme_w@mail.gmail.com Reported-by: Mateusz Guzik <mjguzik@gmail.com> Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jan Kara <jack@suse.cz> Reviewed-by: Baokun Li <libaokun1@huawei.com> Message-ID: <20251125101340.24276-2-jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: enable block size larger than page size  (Baokun Li)
Since the block device (see commit 3c20917120ce ("block/bdev: enable large folio support for large logical block sizes")) and the page cache (see commit ab95d23bab220ef8 ("filemap: allocate mapping_min_order folios in the page cache")) have the ability to have a minimum order when allocating folios, and ext4 has supported large folio in commit 7ac67301e82f ("ext4: enable large folio for regular file"), now add support for block_size > PAGE_SIZE in ext4. set_blocksize() -> bdev_validate_blocksize() already validates the block size, so ext4_load_super() does not need to perform additional checks. Here we only need to add the FS_LBS bit to fs_flags. In addition, block sizes larger than the page size are currently supported only when CONFIG_TRANSPARENT_HUGEPAGE is enabled. To make this explicit, a blocksize_gt_pagesize entry has been added under /sys/fs/ext4/feature/, indicating whether bs > ps is supported. This allows mke2fs to check the interface and determine whether a warning should be issued when formatting a filesystem with block size larger than the page size. Suggested-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Pankaj Raghav <p.raghav@samsung.com> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-25-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
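A sketch of the fs_flags part of the change; the surrounding file_system_type definition is abridged and illustrative, not the exact ext4 one.

static struct file_system_type example_fs_type = {
	.owner		= THIS_MODULE,
	.name		= "ext4",
	.kill_sb	= kill_block_super,
	/* FS_LBS: advertise support for block sizes above PAGE_SIZE */
	.fs_flags	= FS_REQUIRES_DEV | FS_LBS,
};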
2025-11-28  ext4: add checks for large folio incompatibilities when BS > PS  (Baokun Li)
Supporting a block size greater than the page size (BS > PS) requires support for large folios. However, several features (e.g., encrypt) do not yet support large folios. To prevent conflicts, this patch adds checks at mount time to prohibit these features from being used when BS > PS. Since these features cannot be changed on remount, there is no need to check on remount. This patch adds s_max_folio_order, initialized during mount according to filesystem features and mount options. If s_max_folio_order is 0, large folios are disabled. With this in place, ext4_set_inode_mapping_order() can be simplified by checking s_max_folio_order, avoiding redundant checks. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-24-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support verifying data from large folios with fs-verity  (Baokun Li)
Eric Biggers already added support for verifying data from large folios several years ago in commit 5d0f0e57ed90 ("fsverity: support verifying data from large folios"). With ext4 now supporting large block sizes, the fs-verity tests `kvm-xfstests -c ext4/64k -g verity -x encrypt` pass without issues. Therefore, remove the restriction and allow large folios to be enabled together with fs-verity. Cc: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-23-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: make data=journal support large block size  (Baokun Li)
Currently, ext4_set_inode_mapping_order() does not set max folio order for files with the data journalling flag. For files that already have large folios enabled, ext4_inode_journal_mode() ignores the data journalling flag once max folio order is set. This is not because data journalling cannot work with large folios, but because credit estimates will go through the roof if there are too many blocks per folio. Since the real constraint is blocks-per-folio, to support data=journal under LBS, we now set max folio order to be equal to min folio order for files with the journalling flag. When LBS is disabled, the max folio order remains unset as before. Therefore, before ext4_change_inode_journal_flag() switches the journalling mode, we call truncate_pagecache() to drop all page cache for that inode, and filemap_write_and_wait() is called unconditionally. After that, once the journalling mode has been switched, we can safely reset the inode mapping order, and the mapping_large_folio_support() check in ext4_inode_journal_mode() can be removed. Suggested-by: Jan Kara <jack@suse.cz> Suggested-by: Dan Carpenter <dan.carpenter@linaro.org> Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-22-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in __ext4_block_zero_page_range()  (Zhihao Cheng)
Use the EXT4_PG_TO_LBLK() macro to convert folio indexes to blocks to avoid negative left shifts after supporting blocksize greater than PAGE_SIZE. Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-21-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in mpage_prepare_extent_to_map()  (Baokun Li)
Use the EXT4_PG_TO_LBLK/EXT4_LBLK_TO_PG macros to complete the conversion between folio indexes and blocks to avoid negative left/right shifts after supporting blocksize greater than PAGE_SIZE. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-20-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in mpage_map_and_submit_buffers()  (Baokun Li)
Use the EXT4_PG_TO_LBLK/EXT4_LBLK_TO_PG macros to complete the conversion between folio indexes and blocks to avoid negative left/right shifts after supporting blocksize greater than PAGE_SIZE. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-19-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in ext4_block_write_begin()  (Baokun Li)
Use the EXT4_PG_TO_LBLK() macro to convert folio indexes to blocks to avoid negative left shifts after supporting blocksize greater than PAGE_SIZE. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-18-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in ext4_mpage_readpages()  (Baokun Li)
Use the EXT4_PG_TO_LBLK() macro to convert folio indexes to blocks to avoid negative left shifts after supporting blocksize greater than PAGE_SIZE. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-17-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: rename 'page' references to 'folio' in multi-block allocator  (Zhihao Cheng)
The ext4 multi-block allocator now fully supports folio objects. Update all variable names, function names, and comments to replace legacy 'page' terminology with 'folio', improving clarity and consistency. No functional changes. Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-16-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: prepare buddy cache inode for BS > PS with large folios  (Baokun Li)
We use EXT4_BAD_INO for the buddy cache inode number. This inode is not accessed via __ext4_new_inode() or __ext4_iget(), meaning ext4_set_inode_mapping_order() is not called to set its folio order range. However, future block size greater than page size support requires this inode to support large folios, and the buddy cache code already handles BS > PS. Therefore, ext4_set_inode_mapping_order() is now explicitly called for this specific inode to set its folio order range. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-15-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in ext4_mb_init_cache()  (Baokun Li)
Currently, ext4_mb_init_cache() uses blocks_per_page to calculate the folio index and offset. However, when blocksize is larger than PAGE_SIZE, blocks_per_page becomes zero, leading to a potential division-by-zero bug. Since we now have the folio, we know its exact size. This allows us to convert {blocks, groups}_per_page to {blocks, groups}_per_folio, thus supporting block sizes greater than page size. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-14-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in ext4_mb_get_buddy_page_lock()  (Baokun Li)
Currently, ext4_mb_get_buddy_page_lock() uses blocks_per_page to calculate folio index and offset. However, when blocksize is larger than PAGE_SIZE, blocks_per_page becomes zero, leading to a potential division-by-zero bug. To support BS > PS, use bytes to compute folio index and offset within folio to get rid of blocks_per_page. Also, since ext4_mb_get_buddy_page_lock() already fully supports folio, rename it to ext4_mb_get_buddy_folio_lock(). Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-13-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in ext4_mb_load_buddy_gfp()  (Baokun Li)
Currently, ext4_mb_load_buddy_gfp() uses blocks_per_page to calculate the folio index and offset. However, when blocksize is larger than PAGE_SIZE, blocks_per_page becomes zero, leading to a potential division-by-zero bug. To support BS > PS, use bytes to compute folio index and offset within folio to get rid of blocks_per_page. Also, if buddy and bitmap land in the same folio, we get that folio’s ref instead of looking it up again before updating the buddy. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-12-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
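The three buddy-cache patches above share the same byte-based lookup idea. A hedged sketch with illustrative names, not the exact mballoc.c code, of computing the folio index and offset without dividing by blocks_per_page:

static void *example_buddy_data(struct super_block *sb,
				struct inode *bd_inode, u32 block)
{
	loff_t pos = (loff_t)block << sb->s_blocksize_bits;
	pgoff_t index = pos >> PAGE_SHIFT;	/* valid for BS > PS too */
	struct folio *folio = filemap_get_folio(bd_inode->i_mapping, index);

	if (IS_ERR(folio))
		return NULL;
	/* offset within the folio, computed from byte positions */
	return folio_address(folio) + (pos - folio_pos(folio));
}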
2025-11-28  ext4: add EXT4_LBLK_TO_PG and EXT4_PG_TO_LBLK for block/page conversion  (Baokun Li)
As BS > PS support is coming, all block number to page index (and vice-versa) conversions must now go via bytes. Added EXT4_LBLK_TO_PG() and EXT4_PG_TO_LBLK() macros to simplify these conversions and handle both BS <= PS and BS > PS scenarios cleanly. Suggested-by: Jan Kara <jack@suse.cz> Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-11-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
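A reconstruction of what such byte-based conversion helpers could look like; the actual definitions added to ext4 may differ in detail.

/* logical block -> page cache index, valid for both BS <= PS and BS > PS */
#define EXT4_LBLK_TO_PG(sb, lblk)	\
	((pgoff_t)(((loff_t)(lblk) << (sb)->s_blocksize_bits) >> PAGE_SHIFT))

/* page cache index -> logical block, valid for both BS <= PS and BS > PS */
#define EXT4_PG_TO_LBLK(sb, pg)		\
	((ext4_lblk_t)(((loff_t)(pg) << PAGE_SHIFT) >> (sb)->s_blocksize_bits))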
2025-11-28  ext4: add EXT4_LBLK_TO_B macro for logical block to bytes conversion  (Baokun Li)
No functional changes. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-10-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in ext4_readdir()  (Baokun Li)
In ext4_readdir(), page_cache_sync_readahead() is used to readahead mapped physical blocks. With LBS support, this can lead to a negative right shift. To fix this, the page index is now calculated by first converting the physical block number (pblk) to a file position (pos) before converting it to a page index. Also, the correct number of pages to readahead is now passed. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Pankaj Raghav <p.raghav@samsung.com> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-9-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: support large block size in ext4_calculate_overhead()  (Baokun Li)
ext4_calculate_overhead() used a single page for its bitmap buffer, which worked fine when PAGE_SIZE >= block size. However, with block size greater than page size (BS > PS) support, the bitmap can exceed a single page. To address this, we now use kvmalloc() to allocate memory of the filesystem block size, to properly support BS > PS. Suggested-by: Jan Kara <jack@suse.cz> Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-8-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: introduce s_min_folio_order for future BS > PS support  (Baokun Li)
This commit introduces the s_min_folio_order field to the ext4_sb_info structure. This field will store the minimum folio order required by the current filesystem, laying groundwork for future support of block sizes greater than PAGE_SIZE. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Pankaj Raghav <p.raghav@samsung.com> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-7-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: enable DIOREAD_NOLOCK by default for BS > PS as well  (Baokun Li)
The dioread_nolock related code paths already support large folios, so dioread_nolock is enabled by default regardless of whether the blocksize is less than, equal to, or greater than PAGE_SIZE. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-6-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: make ext4_punch_hole() support large block size  (Baokun Li)
When preparing for bs > ps support, clean up unnecessary PAGE_SIZE references in ext4_punch_hole(). Previously, when a hole extended beyond i_size, we aligned the hole end upwards to PAGE_SIZE to handle partial folio invalidation. Now that truncate_inode_pages_range() already handles partial folio invalidation correctly, this alignment is no longer required. However, to save pointless tail block zeroing, we still keep rounding up to the block size here. In addition, as Honza pointed out, when the hole end equals i_size, it should also be rounded up to the block size. This patch fixes that as well. Suggested-by: Jan Kara <jack@suse.cz> Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-5-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: remove PAGE_SIZE checks for rec_len conversion  (Baokun Li)
Previously, ext4_rec_len_(to|from)_disk only performed complex rec_len conversions when PAGE_SIZE >= 65536 to reduce complexity. However, we are soon to support file system block sizes greater than page size, which makes these conditional checks unnecessary. Thus, these checks are now removed. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-4-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: remove page offset calculation in ext4_block_truncate_page()  (Baokun Li)
For bs <= ps scenarios, calculating the offset within the block is sufficient. For bs > ps, an initial page offset calculation can lead to incorrect behavior. Thus this redundant calculation has been removed. Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-3-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  ext4: remove page offset calculation in ext4_block_zero_page_range()  (Zhihao Cheng)
For bs <= ps scenarios, calculating the offset within the block is sufficient. For bs > ps, an initial page offset calculation can lead to incorrect behavior. Thus this redundant calculation has been removed. Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com> Signed-off-by: Baokun Li <libaokun1@huawei.com> Reviewed-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Message-ID: <20251121090654.631996-2-libaokun@huaweicloud.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2025-11-28  Merge tag 'wireless-next-2025-11-27' of https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next  (Jakub Kicinski)
https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next Johannes Berg says: ==================== Apart from the usual small things just driver updates: - mt76: - WED support for >32-bit DMA - airoha NPU support - regdomain improvements - continued WiFi7/MLO work - rtw89 - support USB devices RTL8852AU and RTL8852CU - initial work for RTL8922DE - improved injection support - rtl8xxxu: 40 MHz connection fixes/support - brcmfmac: Acer A1 840 tablet quirk * tag 'wireless-next-2025-11-27' of https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next: (152 commits) wifi: mac80211: allow sharing identical chanctx for S1G interfaces wifi: nl80211: vendor-cmd: intel: fix a blank kernel-doc line warning wifi: cfg80211: include s1g_primary_2mhz when comparing chandefs wifi: cfg80211: include s1g_primary_2mhz when sending chandef wifi: ieee80211: correct FILS status codes mt76: mt7615: Fix memory leak in mt7615_mcu_wtbl_sta_add() wifi: mt76: mt792x: fix wifi init fail by setting MCU_RUNNING after CLC load wifi: mt76: Strip whitespace from build ddate wifi: mt76: mt7996: Add missing locking in mt7996_mac_sta_rc_work() wifi: mt76: mt7996: skip ieee80211_iter_keys() on scanning link remove wifi: mt76: mt7996: skip deflink accounting for offchannel links wifi: mt76: Move mt76_abort_scan out of mt76_reset_device() wifi: mt76: mt7996: move mt7996_update_beacons under mt76 mutex wifi: mt76: mt7996: grab mt76 mutex in mt7996_mac_sta_event() wifi: mt76: mt7925: ensure the 6GHz A-MPDU density cap from the hardware. wifi: mt76: mt7996: fix EMI rings for RRO wifi: mt76: mt7996: fix using wrong phy to start in mt7996_mac_restart() wifi: mt76: mt7996: fix MLO set key and group key issues wifi: mt76: mt7996: fix MLD group index assignment wifi: mt76: mt7996: use correct link_id when filling TXD and TXP ... ==================== Link: https://patch.msgid.link/20251127103806.17776-3-johannes@sipsolutions.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-28  net: Remove KMSG_COMPONENT macro  (Heiko Carstens)
The KMSG_COMPONENT macro is a leftover of the s390 specific "kernel message catalog" from 2008 [1] which never made it upstream. The macro was added to s390 code to allow for an out-of-tree patch which used this to generate unique message ids. Also, this out-of-tree patch doesn't exist anymore. The pattern of how the KMSG_COMPONENT macro is used can also be found in some non s390 specific code, for whatever reasons. Besides adding an indirection, it serves no purpose. Remove the macro in order to get rid of a pointless indirection. Replace all users with the string it defines. In all cases this leads to a simple replacement like this: - #define KMSG_COMPONENT "af_iucv" - #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt + #define pr_fmt(fmt) "af_iucv: " fmt [1] https://lwn.net/Articles/292650/ Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Alexandra Winter <wintera@linux.ibm.com> Acked-by: Julian Anastasov <ja@ssi.bg> Acked-by: Sidraya Jayagond <sidraya@linux.ibm.com> Link: https://patch.msgid.link/20251126140705.1944278-1-hca@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-11-29  ASoC: fsl_micfil: Set default quality and channel  (Mark Brown)
Merge series from Chancel Liu <chancel.liu@nxp.com>: Add default quality for different platforms. Set channel range control.