1. 19 Aug, 2006 1 commit
  2. 17 Aug, 2006 2 commits
    •
      [BNX2]: Convert to netdev_alloc_skb() · 932f3772
      Michael Chan authored
      
      Convert dev_alloc_skb() to netdev_alloc_skb() and increase default
      rx ring size to 255. The old ring size of 100 was too small.
      
      Update version to 1.4.44.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      932f3772
    •
      [BNX2]: Fix tx race condition. · 2f8af120
      Michael Chan authored
      
      Fix a subtle race condition between bnx2_start_xmit() and bnx2_tx_int()
      similar to the one in tg3 discovered by Herbert Xu:
      
      CPU0					CPU1
      bnx2_start_xmit()
      	if (tx_ring_full) {
      		tx_lock
      					bnx2_tx()
      						if (!netif_queue_stopped)
      		netif_stop_queue()
      		if (!tx_ring_full)
      						update_tx_ring
      			netif_wake_queue()
      		tx_unlock
      	}
      
      Even though the tx ring is updated before the if statement in bnx2_tx_int()
      in program order, the stores can be re-ordered by the CPU as shown above.
      This scenario can cause the tx queue to be stopped forever if bnx2_tx_int()
      has just freed up the entire tx ring.  The possibility of this happening
      should be very rare, though.
      
      The following changes are made, very much identical to the tg3 fix:
      
      1. Add memory barrier to fix the above race condition.
      
      2. Eliminate the private tx_lock altogether and rely solely on
      netif_tx_lock.  This eliminates one spinlock in bnx2_start_xmit()
      when the ring is full.
      
      3. Because of 2, use netif_tx_lock in bnx2_tx_int() before calling
      netif_wake_queue().
      
      4. Add memory barrier to bnx2_tx_avail().
      
      5. Add bp->tx_wake_thresh which is set to half the tx ring size.
      
      6. Check for the wake queue condition before taking
      netif_tx_lock in bnx2_tx_int().  This reduces the number of unnecessary
      spinlock acquisitions when the tx ring is full in a steady-state condition.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2f8af120
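
      The barrier-and-recheck pattern behind points 1, 4, and 6 above can be
      sketched as a userspace simulation (a minimal sketch, not the driver
      code itself; the ring sizes, thread names, and atomics stand in for the
      real skb ring, netif_stop_queue()/netif_wake_queue(), and smp_mb()):

      ```c
      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdio.h>
      #include <assert.h>

      #define RING_SIZE   16
      #define WAKE_THRESH (RING_SIZE / 2)   /* analogous to bp->tx_wake_thresh */
      #define NPKTS       1000

      static _Atomic unsigned prod, cons;   /* tx producer/consumer indices */
      static atomic_bool stopped;           /* "queue stopped" flag */
      static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER; /* netif_tx_lock stand-in */

      static unsigned tx_avail(void)
      {
          /* barrier pairs with the one in the completion thread (point 4) */
          atomic_thread_fence(memory_order_seq_cst);
          return RING_SIZE - (atomic_load(&prod) - atomic_load(&cons));
      }

      static void *xmit_thread(void *arg)          /* bnx2_start_xmit() role */
      {
          unsigned sent = 0;
          while (sent < NPKTS) {
              if (tx_avail() == 0) {
                  atomic_store(&stopped, true);    /* netif_stop_queue() */
                  atomic_thread_fence(memory_order_seq_cst);
                  if (tx_avail() > 0)              /* re-check: consumer raced us (point 1) */
                      atomic_store(&stopped, false);
                  continue;
              }
              if (!atomic_load(&stopped)) {
                  atomic_fetch_add(&prod, 1);      /* place packet on the ring */
                  sent++;
              }
          }
          return NULL;
      }

      static void *completion_thread(void *arg)    /* bnx2_tx_int() role */
      {
          while (atomic_load(&cons) < NPKTS) {
              if (atomic_load(&cons) < atomic_load(&prod))
                  atomic_fetch_add(&cons, 1);      /* free one tx descriptor */
              atomic_thread_fence(memory_order_seq_cst);
              /* cheap unlocked check first (point 6), take the lock only if needed */
              if (atomic_load(&stopped) && tx_avail() >= WAKE_THRESH) {
                  pthread_mutex_lock(&tx_lock);
                  if (tx_avail() >= WAKE_THRESH)
                      atomic_store(&stopped, false);  /* netif_wake_queue() */
                  pthread_mutex_unlock(&tx_lock);
              }
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          pthread_create(&a, NULL, xmit_thread, NULL);
          pthread_create(&b, NULL, completion_thread, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          unsigned done = atomic_load(&cons);
          printf("completed %u packets\n", done);
          assert(done == NPKTS);
          return 0;
      }
      ```

      Without the fence-and-recheck after the stop, the simulated queue can
      stall exactly as in the interleaving shown above.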
  3. 08 Jul, 2006 1 commit
  4. 05 Jul, 2006 2 commits
  5. 02 Jul, 2006 1 commit
  6. 30 Jun, 2006 1 commit
  7. 29 Jun, 2006 3 commits
  8. 23 Jun, 2006 1 commit
    •
      [NET]: Merge TSO/UFO fields in sk_buff · 7967168c
      Herbert Xu authored
      
      Having separate fields in sk_buff for TSO/UFO (tso_size/ufo_size) is not
      going to scale if we add any more segmentation methods (e.g., DCCP).  So
      let's merge them.
      
      They were also used to identify the protocol of a packet.  That function
      has been subsumed by the new gso_type field.  This is essentially a set of
      netdev feature bits (shifted by 16 bits) that are required to process a
      specific skb.  As such it's easy to tell whether a given device can
      process a GSO skb: you just AND the gso_type field against the netdev's
      features field.
      
      I've made gso_type a conjunction.  The idea is that you have a base type
      (e.g., SKB_GSO_TCPV4) that can be modified further to support new features.
      For example, if we add a hardware TSO type that supports ECN, devices
      supporting it would declare NETIF_F_TSO | NETIF_F_TSO_ECN.  All TSO packets
      with CWR set would have a gso_type of SKB_GSO_TCPV4 | SKB_GSO_TCPV4_ECN
      while all other TSO packets would be SKB_GSO_TCPV4.  This means that only
      the CWR packets need to be emulated in software.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7967168c
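
      The feature-bit scheme described above can be sketched as follows (a
      minimal sketch; the bit values here are assumptions for illustration,
      only the shift-by-16 relationship and the AND check are taken from the
      commit message):

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* GSO types are netdev feature bits shifted down by 16 bits.
       * The concrete bit positions below are illustrative assumptions. */
      #define NETIF_F_GSO_SHIFT  16
      #define SKB_GSO_TCPV4      1   /* base type: TCP/IPv4 segmentation */
      #define SKB_GSO_UDPV4      2   /* base type: UDP/IPv4 fragmentation */
      #define SKB_GSO_TCPV4_ECN  4   /* modifier: TSO with ECN/CWR set */

      #define NETIF_F_TSO      ((unsigned long)SKB_GSO_TCPV4     << NETIF_F_GSO_SHIFT)
      #define NETIF_F_UFO      ((unsigned long)SKB_GSO_UDPV4     << NETIF_F_GSO_SHIFT)
      #define NETIF_F_TSO_ECN  ((unsigned long)SKB_GSO_TCPV4_ECN << NETIF_F_GSO_SHIFT)

      /* A device can process a GSO skb iff every bit of its gso_type
       * (shifted back up) is present in the device's feature mask. */
      static int net_gso_ok(unsigned long features, int gso_type)
      {
          unsigned long feature = (unsigned long)gso_type << NETIF_F_GSO_SHIFT;
          return (features & feature) == feature;
      }

      int main(void)
      {
          unsigned long dev_features = NETIF_F_TSO;   /* plain TSO only */

          /* ordinary TSO packet: handled in hardware */
          assert(net_gso_ok(dev_features, SKB_GSO_TCPV4));

          /* CWR-marked packet: extra modifier bit missing, needs sw emulation */
          assert(!net_gso_ok(dev_features, SKB_GSO_TCPV4 | SKB_GSO_TCPV4_ECN));

          /* a device declaring NETIF_F_TSO | NETIF_F_TSO_ECN handles both */
          dev_features |= NETIF_F_TSO_ECN;
          assert(net_gso_ok(dev_features, SKB_GSO_TCPV4 | SKB_GSO_TCPV4_ECN));

          printf("gso_type checks passed\n");
          return 0;
      }
      ```

      The conjunction design means a device never sees a gso_type it only
      partially supports: any unsupported modifier bit fails the AND check and
      routes the skb to software emulation.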
  9. 18 Jun, 2006 7 commits
  10. 22 May, 2006 2 commits
  11. 12 Apr, 2006 1 commit
  12. 23 Mar, 2006 5 commits
  13. 20 Mar, 2006 7 commits
  14. 03 Mar, 2006 1 commit
  15. 23 Jan, 2006 5 commits