Hyper-V network driver
======================

Compatibility
=============

This driver is compatible with Windows Server 2012 R2, 2016 and
Windows 10.

Features
========

  Checksum offload
  ----------------
  The netvsc driver supports checksum offload as long as the
  Hyper-V host version does. Windows Server 2016 and Azure
  support checksum offload for TCP and UDP for both IPv4 and
  IPv6. Windows Server 2012 only supports checksum offload for TCP.
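  For example, offload settings can be inspected and changed from the
  guest with standard ethtool commands (eth0 is used here only as an
  example interface name):
        ethtool -k eth0 | grep checksum
        ethtool -K eth0 rx on tx on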

  Receive Side Scaling
  --------------------
  Hyper-V supports receive side scaling. For TCP, packets are
  distributed among available queues based on IP address and port
  number.
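  The number of queues and the current RSS indirection table can be
  inspected from the guest with ethtool (again assuming eth0):
        ethtool -l eth0
        ethtool -x eth0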

  For UDP, the hash level can be switched between L3 and L4 with an
  ethtool command. UDP over IPv4 and UDP over IPv6 can be configured
  independently. The default hash level is L4. Currently, only the TX
  hash level can be changed from within the guest.

  On Azure, fragmented UDP packets have a high loss rate with L4
  hashing. Using L3 hashing is recommended in this case.

  For example, for UDP over IPv4 on eth0:
  To include UDP port numbers in hashing:
        ethtool -N eth0 rx-flow-hash udp4 sdfn
  To exclude UDP port numbers in hashing:
        ethtool -N eth0 rx-flow-hash udp4 sd
  To show UDP hash level:
        ethtool -n eth0 rx-flow-hash udp4

  Generic Receive Offload, aka GRO
  --------------------------------
  The driver supports GRO and it is enabled by default. GRO coalesces
  like packets and significantly reduces CPU usage under heavy Rx
  load.
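  GRO can be checked or toggled from the guest with ethtool, for
  example on eth0:
        ethtool -k eth0 | grep generic-receive-offload
        ethtool -K eth0 gro off
        ethtool -K eth0 gro on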

  SR-IOV support
  --------------
  Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
  is enabled in both the vSwitch and the guest configuration, then the
  Virtual Function (VF) device is passed to the guest as a PCI
  device. In this case, both a synthetic (netvsc) and a VF device are
  visible in the guest OS, and both NICs have the same MAC address.

  The VF is enslaved by the netvsc device.  The netvsc driver will
  transparently switch the data path to the VF when it is available and up.
  Network state (addresses, firewall, etc.) should be applied only to the
  netvsc device; the slave device should not be accessed directly in
  most cases.  The exception is when a special queue discipline or
  flow direction is desired; such settings should be applied directly
  to the VF slave device.
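  For example, assuming the VF appears in the guest as enP2p0s2 (a
  hypothetical name; the actual name will vary), a queue discipline
  meant for the VF data path would be attached to the VF device rather
  than to the netvsc device:
        tc qdisc add dev enP2p0s2 root fq
  All other configuration (addresses, routes, firewall rules) still
  goes on the netvsc device.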

  Receive Buffer
  --------------
  Packets are received into a receive area which is created when the device
  is probed. The receive area is broken into MTU-sized chunks, each of which
  may contain one or more packets. The number of receive sections may be
  changed via the ethtool Rx ring parameters.
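  For example, the current and maximum ring sizes can be read, and the
  number of receive sections changed, with ethtool (eth0 as an example;
  the accepted range depends on the driver and host):
        ethtool -g eth0
        ethtool -G eth0 rx 1024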

  There is a similar send buffer which is used to aggregate packets for
  sending. The send area is broken into chunks of 6144 bytes, and each
  section may contain one or more packets. The send buffer is an
  optimization; the driver will fall back to a slower method to handle
  very large packets or when the send buffer area is exhausted.
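  Per-queue activity and fallback behaviour can be observed through the
  driver's ethtool statistics; the exact counter names depend on the
  kernel version:
        ethtool -S eth0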