Age | Commit message | Author |
|
The header field is defined as u8[] but also accessed as struct
ieee80211_hdr. Enforce an alignment of 2 to prevent unnecessary
unaligned accesses, which can be very harmful for performance on many
platforms.
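A minimal sketch of the idea (the surrounding struct and field sizes are assumed here purely for illustration):

    /* Sketch only: the cached 802.11 header is stored as raw bytes but
     * also read through struct ieee80211_hdr, so force 2-byte alignment
     * to avoid unaligned accesses on the fast-xmit path. */
    struct fast_tx_hdr_example {
        u8 hdr_len;
        u8 hdr[30] __aligned(2);  /* also accessed as struct ieee80211_hdr */
    };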
Fixes: e495c24731a2 ("mac80211: extend fast-xmit for more ciphers")
Cc: stable@vger.kernel.org
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
|
|
If the driver advertises the new HW flag USE_RSS, make the
station statistics on the fast-rx path per-CPU. This will
enable calling the RX in parallel, only hitting locking or
shared cachelines when the fast-RX path isn't available.
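A rough sketch of the per-CPU approach (struct layout and names are hypothetical, not the actual mac80211 code):

    #include <linux/percpu.h>

    struct rx_stats_example {
        u64 packets;
        u64 bytes;
    };

    struct rx_stats_example __percpu *pcpu_stats;

    /* allocate one counter block per CPU */
    pcpu_stats = alloc_percpu(struct rx_stats_example);
    if (!pcpu_stats)
        return -ENOMEM;

    /* on the RX path each CPU only touches its own copy, so no locks
     * or shared cachelines are needed */
    this_cpu_inc(pcpu_stats->packets);
    this_cpu_add(pcpu_stats->bytes, skb->len);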
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The regular RX path has a lot of code, but with a few
assumptions on the hardware it's possible to reduce the
amount of code significantly. Currently the assumptions
on the driver are the following:
* hardware/driver reordering buffer (if supporting aggregation)
* hardware/driver decryption & PN checking (if using encryption)
* hardware/driver did de-duplication
* hardware/driver did A-MSDU deaggregation
* AP_LINK_PS is used (in AP mode)
* no client powersave handling in mac80211 (in client mode)
of which some are actually checked per packet:
* de-duplication
* PN checking
* decryption
and additionally packets must
* not be A-MSDU (have been deaggregated by driver/device)
* be data packets
* not be fragmented
* be unicast
* have RFC 1042 header
Additionally, we dynamically assume:
* no encryption or CCMP/GCMP, TKIP/WEP/other not allowed
* station must be authorized
* 4-addr format not enabled
Some data needed for the RX path is cached in a new per-station
"fast_rx" structure, so that we only need to look at this and
the packet, no other memory when processing packets on the fast
RX path.
After doing the above per-packet checks, the data path collapses
down to a pretty simple conversion function taking advantage of
the data cached in the small fast_rx struct.
This should speed up RX processing, and will make it easier
to reason about parallelizing RX (for which statistics will
still need to be per-CPU).
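As an illustration only (field names are assumed, not the actual struct), the cached per-station data might look roughly like this:

    /* Sketch of a per-station "fast_rx" cache: everything the fast path
     * needs to turn an 802.11 data frame into an 802.3 frame without
     * touching any other station state. */
    struct fast_rx_example {
        struct net_device *dev;
        u8 rfc1042_hdr[6];
        u8 vif_addr[ETH_ALEN];
        u8 da_offs, sa_offs;    /* DA/SA offsets in the 802.11 header */
        u8 expected_ds_bits;    /* FromDS/ToDS bits that must match */
        u8 icv_len;             /* tail to strip after HW decryption */
        u8 uses_rss:1;          /* statistics kept per-CPU */
    };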
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
On 32-bit platforms, the 64-bit counters we keep need to be protected
to be consistently read. Use the u64_stats_sync mechanism to do that.
In order to not end up with overly long lines, refactor the tidstats
assignments a bit.
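For reference, the usual u64_stats_sync pattern looks like this (field names assumed):

    #include <linux/u64_stats_sync.h>

    struct tidstats_example {
        u64 rx_msdu;
        struct u64_stats_sync syncp;
    };

    /* writer (RX path) */
    u64_stats_update_begin(&stats->syncp);
    stats->rx_msdu++;
    u64_stats_update_end(&stats->syncp);

    /* reader: retry while a writer is active so the 64-bit value is
     * read consistently even on 32-bit platforms */
    unsigned int start;
    u64 value;

    do {
        start = u64_stats_fetch_begin(&stats->syncp);
        value = stats->rx_msdu;
    } while (u64_stats_fetch_retry(&stats->syncp, start));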
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
When storing the last_rate_* values in the RX code, there's nothing
to guarantee consistency, so a concurrent reader could see, e.g.
last_rate_idx on the new value, but last_rate_flag still on the old,
getting completely bogus values in the end.
To fix this, I lifted the sta_stats_encode_rate() function from my
old rate statistics code, which encodes the entire rate data into a
single 16-bit value, avoiding the consistency issue.
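The exact encoding is internal to mac80211; a sketch of the general idea (bit layout assumed purely for illustration):

    /* Sketch: pack rate index, bandwidth and rate type into one u16 so
     * a single aligned store/load gives readers a consistent snapshot. */
    #define RATE_EX_IDX(idx)    ((idx) & 0xff)
    #define RATE_EX_BW(bw)      (((bw) & 0x3) << 8)
    #define RATE_EX_TYPE(type)  (((type) & 0x7) << 10)

    static u16 sta_stats_encode_rate_example(u8 idx, u8 bw, u8 type)
    {
        return RATE_EX_IDX(idx) | RATE_EX_BW(bw) | RATE_EX_TYPE(type);
    }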
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Instead of touching the rx_stats.last_rx from the status path, introduce
and use a status_stats.last_ack variable. This will make rx_stats.last_rx
indicate when the last frame was received, making it available for real
"last_rx" and statistics gathering; statistics, when done per-CPU, will
need to figure out which place was updated last for those items where the
"last" value is exposed.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Move the averaged values out of rx_stats and into rx_stats_avg,
to cleanly split them out. The averaged ones cannot be supported
for parallel RX in a per-CPU fashion, while the other values can
be collected per CPU and then combined/selected when needed.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Avoid the really strange %s%s%s expression; use an array
of flag names instead, and check that all flags are present.
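A minimal sketch of the pattern (buffer handling elided; the designated initializers keep the names aligned with the flag enum):

    static const char * const sta_flag_names_example[] = {
        [WLAN_STA_AUTH]       = "AUTH",
        [WLAN_STA_ASSOC]      = "ASSOC",
        [WLAN_STA_AUTHORIZED] = "AUTHORIZED",
        /* ... one entry per flag ... */
    };

    for (i = 0; i < ARRAY_SIZE(sta_flag_names_example); i++)
        if (test_sta_flag(sta, i))
            pos += scnprintf(pos, end - pos, "%s ",
                             sta_flag_names_example[i]);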
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Since the previous patch, the struct only has a single member,
so remove the struct and leave just the single member.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Remove an unused variable in the per-STA debugfs structure; commit
34e895075e21 ("mac80211: allow station add/remove to sleep") removed the
only user of 'add_has_run'.
Signed-off-by: Mohammed Shafi Shajakhan <mohammed@qti.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Commit 976bd9efdae6 ("mac80211: move beacon_loss_count into ifmgd")
removed the member from the sta_info struct, but its description was
left lingering. Remove it.
Signed-off-by: Luis de Bethencourt <luisbg@osg.samsung.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
If any frames are dropped that are part of a BA session, the reorder
buffer will "indefinitely" (until the timeout) wait for them to come
in (or a BAR moving the window) and won't release frames after them.
This means it isn't possible to filter frames within a BA session in
firmware.
Introduce an API function that allows such filtering. Calling this
function will move the BA window forward to the new SSN, and allows
marking frames after the SSN as having been filtered, so any future
reordering activity will release frames while skipping the holes.
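A rough sketch of how a driver might use such an API; the exact prototype of ieee80211_mark_rx_ba_filtered_frames() is assumed here and should be checked against include/net/mac80211.h:

    /* Sketch: the firmware reports that it filtered frames of a BA
     * session; move the reorder window to 'ssn' and mark the filtered
     * frames so later reordering releases frames across the holes. */
    ieee80211_mark_rx_ba_filtered_frames(pubsta, tid, ssn,
                                         filtered_bitmap, received_mpdus);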
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Enable driver to manage the reordering logic itself.
This is needed for example for the iwlwifi driver that
will support hardware assisted reordering.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Add ieee80211_iter_keys_rcu() to iterate over uploaded
keys in atomic context (under the RCU read lock).
The station removal code removes the keys only after
calling synchronize_net(), so it's not safe to iterate
the keys at this point (and postponing the actual key
deletion with call_rcu() might result in some
badly-ordered ops calls).
Add a flag to indicate a station is being removed,
and skip the configured keys if it's set.
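Usage would look roughly like this (the callback signature is assumed to mirror ieee80211_iter_keys()):

    /* Sketch: iterate uploaded keys while only the RCU read lock is
     * held, e.g. from a driver restart/reconfiguration path. */
    static void key_iter_example(struct ieee80211_hw *hw,
                                 struct ieee80211_vif *vif,
                                 struct ieee80211_sta *sta,
                                 struct ieee80211_key_conf *key,
                                 void *data)
    {
        /* re-program 'key' into the hardware here */
    }

    rcu_read_lock();
    ieee80211_iter_keys_rcu(hw, vif, key_iter_example, NULL);
    rcu_read_unlock();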
Signed-off-by: Eliad Peller <eliadx.peller@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Group station statistics by where they're (mostly) updated
(TX, RX and TX-status) and group them into sub-structs of
the struct sta_info.
Also rename the variables since the grouping now makes it
obvious where they belong.
This makes it easier to identify where the statistics are
updated in the code, and thus easier to think about them.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
There's little point in keeping (and even sending to userspace)
the beacon_loss_count value per station, since it can only apply
to the AP on a managed-mode connection. Move the value to ifmgd,
advertise it only in managed mode, and remove it from ethtool as
it's available through better interfaces.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
This file only feeds a debugfs file that isn't very useful, so remove
it. If necessary, we can add other ways to get this information, for
example in the NL80211_CMD_PROBE_CLIENT response.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
There's only a single caller of this function, so it can
be moved to the same file and made static.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Advertise the capability to send A-MSDU within A-MPDU
in the AddBA request sent by mac80211. Let the driver
know about the peer's capabilities.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Instead of using the out-of-line average calculation, use the new
DECLARE_EWMA() macro to declare a signal EWMA, and use that.
This actually *reduces* the code size slightly (on x86-64) while
also reducing the station info size by 80 bytes.
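For reference, the macro generates a small per-name API; the parameters below are only an example (their meaning has changed between kernel versions):

    #include <linux/average.h>

    /* declares struct ewma_signal plus ewma_signal_init/add/read */
    DECLARE_EWMA(signal, 1024, 8)

    struct ewma_signal avg_signal;
    unsigned long avg;

    ewma_signal_init(&avg_signal);
    ewma_signal_add(&avg_signal, -signal_dbm);  /* stored negated */
    avg = ewma_signal_read(&avg_signal);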
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
According to 802.11-2012 13.3.1, a mesh STA should assign an AID
upon receipt of a mesh peering open frame rather than using the link
id of the peer. Using the peer link id has two potential issues:
it may not be unique among the peers, and by its nature it is random,
so the TIM may not compress well.
In preparation for allocating it properly, use sta->sta.aid, but keep
the existing behavior of using the plid in the aid we send.
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
This value is only used in mesh, so move it into the new mesh
sub-struct of the station info.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Define a station chandef, to be used for wider-bw TDLS peers. When both
peers support the feature, upgrade the channel bandwidth to the maximum
allowed by both peers and regulatory. Currently widths up to 80MHz are
supported in the 5GHz band.
When a TDLS peer connects/disconnects recalculate the channel type of the
current chanctx.
Make the chanctx width calculation consider wider-bw TDLS peers and
similarly fix the max_required_bw calculation for the chanctx min_def.
Since the sta->bandwidth is calculated only later on, take
bss_conf.chandef.width as the minimal width for station interface.
Set the upgraded channel width in the VHT-operation set during TDLS setup.
Signed-off-by: Arik Nemtsov <arikx.nemtsov@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Allow a device to specify support for the TDLS wider-bandwidth feature.
Indicate this support during TDLS setup in the ext-capab IE and set an
appropriate station flag when our TDLS peer supports it.
This feature gives TDLS peers the ability to use a wider channel than
the base width of the BSS. For instance VHT capable TDLS peers connected
on a 20MHz channel can extend the channel to 80MHz, if regulatory
considerations allow it.
Do not cap the bandwidth of such stations by the current BSS channel width
in mac80211.
Signed-off-by: Arik Nemtsov <arikx.nemtsov@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
There are now a fairly large number of mesh fields that really
aren't needed in any other modes; move those into their own
structure and allocate them separately.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Currently, the station hash table lookup (or iteration) must
access two cachelines for each station - the one with the hash
table node, and the one with the MAC address.
Duplicate the MAC address next to the hash node to get rid of
this. Since the MAC address is static there's no consistency
problem introduced by this.
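Conceptually (a sketch, not the actual sta_info layout):

    /* Sketch: keep a copy of the MAC address right next to the hash
     * node so lookups touch a single cacheline per station. The copy
     * never changes, so there is no consistency concern. */
    struct sta_info_example {
        struct rhash_head hash_node;
        u8 addr[ETH_ALEN];      /* duplicate of sta.addr */
        /* ... remaining station data, in other cachelines ... */
    };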
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
There are no RX queues in mac80211 (yet), the comment should refer
to the TID (including one slot for non-QoS) rather than 'RX queue'.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
This struct member is only assigned, never used otherwise;
remove it.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
In mesh mode there is a race between establishing links and processing
rates and capabilities in beacons. This is very noticeable with slow
beacons (e.g. beacon intervals of 1s), and manifested for us as stations
using minstrel when minstrel_ht should have been used. Fix this by changing
mesh_sta_info_init() so that it always checks rates and capabilities if it
has not already done so.
Signed-off-by: Alexis Green <agreen@cococorp.com>
CC: Jesse Jones <jjones@cococorp.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
This was missed in the previous patch, add some documentation
for rate_ctrl_lock to avoid docbook warnings.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
This counter is unsafe with concurrent TX and is only exposed
through debugfs and ethtool. Instead of trying to fix it just
remove it for now, if it's really needed then it should be
exposed through nl80211 and in a way that drivers that do the
fragmentation in the device could support it as well.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
When crypto is offloaded then in some cases it's all handled
by the device, and in others only some space for the IV must
be reserved in the frame. Handle both of these cases in the
fast-xmit path, up to a limit of 18 bytes of space for IVs.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
In order to speed up mac80211's TX path, add the "fast-xmit" cache
that will cache the data frame 802.11 header and other data to be
able to build the frame more quickly. This cache is rebuilt when
external triggers imply changes, but a lot of the checks done per
packet today are simplified away to the check for the cache.
There's also a more detailed description in the code.
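By way of illustration (field and variable names assumed), the per-frame fast path then reduces to roughly:

    /* Sketch only: copy the prebuilt 802.11 header in front of the
     * payload and fill in just the addresses from cached offsets;
     * sequence number, QoS and IV fields are filled per frame. */
    hdr = skb_push(skb, fast_tx->hdr_len);
    memcpy(hdr, fast_tx->hdr, fast_tx->hdr_len);
    memcpy(hdr + fast_tx->da_offs, da, ETH_ALEN);
    memcpy(hdr + fast_tx->sa_offs, sa, ETH_ALEN);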
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Both minstrel (reported by Sven Eckelmann) and the iwlwifi rate
control aren't properly taking concurrency into account. It's
likely that the same is true for other rate control algorithms.
In the case of minstrel this manifests itself in crashes when an
update and other data access are run concurrently, for example
when the stations change bandwidth or similar. In iwlwifi, this
can cause firmware crashes.
Since fixing all rate control algorithms will be very difficult,
just provide locking for invocations. This protects the internal
data structures the algorithms maintain.
I've manipulated hostapd to test this, by having it change its
advertised bandwidth roughly every 150 ms. At the same time, I'm
running a flood ping between the client and the AP, which causes
this race of update vs. get_rate/status to easily happen on the
client. With this change, the system survives this test.
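In effect, every invocation of the rate control ops is now wrapped in a per-station lock, roughly like this (a sketch; the lock placement is assumed):

    /* Sketch: serialize calls into the rate control algorithm so its
     * internal per-station state is never updated concurrently. */
    spin_lock_bh(&sta->rate_ctrl_lock);
    ref->ops->tx_status(ref->priv, sband, &sta->sta, priv_sta, skb);
    spin_unlock_bh(&sta->rate_ctrl_lock);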
Reported-by: Sven Eckelmann <sven@open-mesh.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The mesh plink code uses sta->lock to serialize access to the
plink state fields between the peer link state machine and the
peer link timer. Some paths (e.g. those involving
mps_qos_null_tx()) unfortunately hold this spinlock across
frame tx, which is soon to be disallowed. Add a new spinlock
just for plink access.
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next
Johannes Berg says:
====================
There isn't much left, but we have
* new mac80211 internal software queue to allow drivers to have
shorter hardware queues and pull on-demand
* use rhashtable for mac80211 station table
* minstrel rate control debug improvements and some refactoring
* fix noisy message about TX power reduction
* fix continuous message printing and activity if CRDA doesn't respond
* fix VHT-related capabilities with "iw connect" or "iwconfig ..."
* fix Kconfig for cfg80211 wireless extensions compatibility
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Conflicts:
drivers/net/usb/asix_common.c
drivers/net/usb/sr9800.c
drivers/net/usb/usbnet.c
include/linux/usb/usbnet.h
net/ipv4/tcp_ipv4.c
net/ipv6/tcp_ipv6.c
The TCP conflicts were overlapping changes. In 'net' we added a
READ_ONCE() to the socket cached RX route read, whilst in 'net-next'
Eric Dumazet touched the surrounding code dealing with how mini
sockets are handled.
With USB, it's a case of the same bug fix first going into net-next
and then I cherry picked it back into net.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This allows drivers to request per-vif and per-sta-tid queues from which
they can pull frames. This makes it easier to keep the hardware queues
short, and to improve fairness between clients and vifs.
The task of scheduling packet transmission is left up to the driver -
queueing is controlled by mac80211. Drivers can only dequeue packets by
calling ieee80211_tx_dequeue. This makes it possible to add active queue
management later without changing drivers using this code.
This can also be used as a starting point to implement A-MSDU
aggregation in a way that does not add artificially induced latency.
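From a driver's perspective, usage is roughly as follows (the helpers hw_has_room() and hw_submit_frame() are hypothetical placeholders):

    /* Sketch: mac80211 signals pending frames via the wake_tx_queue
     * callback; the driver pulls frames whenever it has room, keeping
     * its hardware queues short. */
    static void drv_wake_tx_queue_example(struct ieee80211_hw *hw,
                                          struct ieee80211_txq *txq)
    {
        struct sk_buff *skb;

        while (hw_has_room() &&
               (skb = ieee80211_tx_dequeue(hw, txq)))
            hw_submit_frame(skb);
    }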
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
[resolved minor context conflict, minor changes, endian annotations]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
There's an issue with the way the RX A-MPDU reorder timer is
deleted that can cause a kernel crash like this:
* tid_rx is removed - call_rcu(ieee80211_free_tid_rx)
* station is destroyed
* reorder timer fires before ieee80211_free_tid_rx() runs,
accessing the station, thus potentially crashing due to
the use-after-free
The station deletion is protected by synchronize_net(), but
that isn't enough -- ieee80211_free_tid_rx() need not have
run when that returns (it deletes the timer.) We could use
rcu_barrier() instead of synchronize_net(), but that's much
more expensive.
Instead, to fix this, add a field tracking that the session
is being deleted. In this case, the only re-arming of the
timer happens with the reorder spinlock held, so make that
code not rearm it if the session is being deleted and also
delete the timer after setting that field. This ensures the
timer cannot fire after ___ieee80211_stop_rx_ba_session()
returns, which fixes the problem.
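Schematically, the fix amounts to the following (a sketch with simplified names):

    /* teardown path, under the reorder spinlock: */
    spin_lock_bh(&tid_rx->reorder_lock);
    tid_rx->removed = true;
    spin_unlock_bh(&tid_rx->reorder_lock);
    del_timer_sync(&tid_rx->reorder_timer);

    /* release path, also under the reorder spinlock: */
    if (!tid_rx->removed)
        mod_timer(&tid_rx->reorder_timer, timeout);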
Cc: stable@vger.kernel.org
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
We currently have a hand-rolled table with 256 entries and are
using the last byte of the MAC address as the hash. This hash
is obviously very fast, but collisions are easily created and
we waste a lot of space in the common case of connecting
as a client to an AP, where we have just a single station. The
other common case of an AP is also suboptimal due to the size
of the hash table and the ease of causing collisions.
Convert all of this to use rhashtable with jhash, which gives
us the advantage of a far better hash function (with random
perturbation to avoid hash collision attacks) and of course
that the hash table grows and shrinks dynamically with chain
length, improving both cases above.
Use a specialised hash function (using jhash, but with fixed
length) to achieve better compiler optimisation as suggested
by Sergey Ryazanov.
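For illustration, the setup looks roughly like this (simplified names; the real parameters live in net/mac80211/sta_info.c):

    #include <linux/rhashtable.h>
    #include <linux/jhash.h>

    static u32 sta_addr_hash_example(const void *key, u32 len, u32 seed)
    {
        return jhash(key, ETH_ALEN, seed);  /* fixed-length jhash */
    }

    static const struct rhashtable_params sta_rht_params_example = {
        .nelem_hint  = 3,
        .head_offset = offsetof(struct sta_info, hash_node),
        .key_offset  = offsetof(struct sta_info, addr),
        .key_len     = ETH_ALEN,
        .hashfn      = sta_addr_hash_example,
    };

    err = rhashtable_init(&local->sta_hash, &sta_rht_params_example);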
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Revert commit ad38bfc916da ("mac80211: Tx frame latency statistics")
(along with some follow-up fixes).
This code turned out not to be as useful in the current form as we
thought, and we've internally hacked it up more, but that's not
very suitable for upstream (for now), and we might just do that
with tracing instead.
Therefore, for now at least, remove this code. We might also need
to use the skb->tstamp field for the TCP performance issue, which
is more important than the debugging.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Implement the new counters cfg80211 can now advertise to userspace.
The TX code is in the sequence number handler, which is a bit odd,
but that place already knows the TID and frame type, so it was
easiest and least impact there.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
In TDLS (e.g., TDLS off-channel), some drivers need to supply the FW
with a TID that is unused between the device and the AP, to allow
sending PTI requests and to let the FW aggregate on a specific TID
for better throughput.
To ensure that the allocated TID is indeed unused, this patch
introduces an API for blocking the driver from TXing on that
TID.
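A sketch of how a driver might use this (the ieee80211_reserve_tid() and ieee80211_unreserve_tid() prototypes are assumed from memory and should be checked against include/net/mac80211.h):

    /* Sketch: reserve a TID that mac80211 will then not use for normal
     * TX towards this station, so the FW can use it for PTI requests
     * and aggregation. */
    ret = ieee80211_reserve_tid(sta, tid);
    if (ret)
        return ret;
    /* ... later, when no longer needed ... */
    ieee80211_unreserve_tid(sta, tid);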
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Implement the cfg80211 TDLS channel switch ops and introduce new mac80211
ones for low-level drivers.
Verify low-level driver support for the new ops when using the relevant
wiphy feature bit. Also verify the peer supports channel switching before
passing the command down.
Add a new STA flag to track the off-channel state with the TDLS peer and
make sure to cancel the channel-switch if the peer STA is unexpectedly
removed.
Signed-off-by: Arik Nemtsov <arikx.nemtsov@intel.com>
Signed-off-by: Arik Nemtsov <arik@wizery.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The AP or peer can prohibit TDLS channel switch via a bit in the
extended capabilities IE. Parse the IE and track this bit. Set an
appropriate STA flag if both the AP and peer STA support TDLS
channel-switching.
Add the new STA flag and the missing TDLS_INITIATOR to debugfs.
Signed-off-by: Arik Nemtsov <arikx.nemtsov@intel.com>
Signed-off-by: Arik Nemtsov <arik@wizery.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Forgot to add an entry to the struct description of sta_info.
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Add a timeout for tearing down a TDLS connection that
hasn't had any ACKed traffic sent through it for a
certain amount of time.
Since we have no other monitoring facility to indicate the
existence (or non-existence) of a peer, this patch causes a
peer to be considered unavailable if, within some time
period X, at least Y packets have all gone unACKed.
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Our legal structure changed at some point (see wikipedia), but
we forgot to immediately switch over to the new copyright
notice.
For files that we have modified in the time since the change,
add the proper copyright notice now.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
When starting an offloaded BA session it is
unknown what starting sequence number should be
used. Using last_seq worked in most cases except
after hw restart.
When hw restart is requested, last_seq is
(rightfully so) kept unmodified. This ended up
with BA sessions being restarted with arbitrary
BA window values, resulting in dropped frames
until sequence numbers caught up.
Instead of last_seq, pick the seqno of the first
Rxed frame of a given BA session.
This fixes stalled traffic after hw restart with
offloaded BA sessions (currently only ath10k).
Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|