FreeBSD TCP SYN-FIN Kernel Tunable Assessment

by Elias Griffin, 01/19/2024

FreeBSD has a kernel tunable named net.inet.tcp.drop_synfin that, when enabled, drops TCP packets with both the SYN and FIN flags set, a well-known security mitigation (the sysctl commands to toggle it are shown after the list below). To test the effectiveness of this setting, Quadhelion Engineering has performed:

  1. Probing/Scanning
  2. Simulated DoS attack
  3. Load testing
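
The tunable can be toggled at runtime with sysctl(8); the value in effect can be read the same way:

# read the current value (the FreeBSD default is 0, i.e. SYN+FIN accepted)
sysctl net.inet.tcp.drop_synfin

# drop SYN+FIN packets (hardened)
sysctl net.inet.tcp.drop_synfin=1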

The test server is a fully patched FreeBSD 14.0 amd64 instance on a Vultr Cloud Compute, High Frequency, single-core 3.2 GHz Intel Xeon server. Different tests and parameters were experimented with; after settling on the test types and parameters shown here, dozens of runs were made and fine-tuned, and for brevity the results most indicative of the whole were selected. We are testing the SYN-FIN setting as implemented by FreeBSD, and as it broadly affects FreeBSD, to ensure that its inclusion as a security measure does not unduly affect capability and performance where little security is gained. The first test, however, is probing a target machine as in the reconnaissance phase of a cyberattack using Nmap, the "classic" use case.

SYN-FIN Probing

This was done without a firewall or any other security software except the Quadhelion Engineering harden-freebsd software, toggling only net.inet.tcp.drop_synfin between 1 and 0.

SYN Probe

nmap 45.76.235.165 -sS

Result: Success

Starting Nmap 7.91 ( https://nmap.org ) at 2024-01-17 20:31 MST
Nmap scan report for 45.76.235.165.vultrusercontent.com (45.76.235.165)
Host is up (0.0018s latency).
Not shown: 998 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http

FIN Probe

nmap 45.76.235.165 -sF

Result: Failure

sendto in send_ip_packet_sd: sendto(4, packet, 40, 0, 45.76.235.165, 16) => Permission denied

(The error is raised by sendto(2) on the scanning host itself, so the probe packet was refused before it left the machine.)

Reference

nmap -sS --scanflags SYNFIN -T4 45.76.235.165

Result: Success

Starting Nmap 7.91 ( https://nmap.org ) at 2024-01-17 20:41 MST
Nmap scan report for 45.76.235.165.vultrusercontent.com (45.76.235.165)
Host is up (0.00100s latency).
Not shown: 998 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http

Nmap FIN SYNFIN Probe

nmap -sF --scanflags SYNFIN -T4 45.76.235.165

Result: False Positive

Starting Nmap 7.91 ( https://nmap.org ) at 2024-01-17 20:48 MST
Nmap scan report for 45.76.235.165.vultrusercontent.com (45.76.235.165)
Host is up (0.00077s latency).
All 1000 scanned ports on 45.76.235.165.vultrusercontent.com (45.76.235.165) are open|filtered
Nmap done: 1 IP address (1 host up) scanned in 25.01 seconds

nmap -sS --scanflags FIN -T4 45.76.235.165

Result: Failure

Starting Nmap 7.91 ( https://nmap.org ) at 2024-01-17 20:52 MST
sendto in send_ip_packet_sd: sendto(4, packet, 40, 0, 45.76.235.165, 16) => Permission denied

Confirmed with the setting disabled by capturing SYN+FIN packets on the target:

tcpdump -i vtnet0 "tcp[tcpflags] & (tcp-syn|tcp-fin) == (tcp-syn|tcp-fin)" -v
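
To reproduce the confirmation, the capture can be left running on the target while the SYNFIN probe from above is re-sent from the scanning host:

# on the target server: print any packet with both SYN and FIN set
tcpdump -ni vtnet0 "tcp[tcpflags] & (tcp-syn|tcp-fin) == (tcp-syn|tcp-fin)"

# on the scanning host
nmap -sS --scanflags SYNFIN -T4 45.76.235.165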

SYN FLOOD Results

Python SYNFLOOD

python3 py3_synflood_cmd.py -t 45.76.235.165 -p 80 -c 30000
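
The flood script itself is not reproduced here. As a rough equivalent for readers without it, a comparable SYN flood can be generated with hping3 (a separate tool, not used in these tests; the packet count mirrors the run above, the inter-packet interval is an assumption):

# 30,000 SYN packets to port 80, one every 100 microseconds
hping3 -S -p 80 -c 30000 -i u100 45.76.235.165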

While being flooded (at small-to-medium volume, so as not to trigger my own DDoS protections), I repeatedly pinged out to 9.9.9.9 from the server being flooded to gauge overall network latency. This test was performed many times; these results are indicative.
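
The latency check was along these lines (the exact invocation is an assumption; the count of 210 is implied by the summaries below):

# quiet mode prints only the summary statistics
ping -q -c 210 9.9.9.9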

SYNFIN accepted

210 packets transmitted, 210 packets received, 0.0% packet loss round-trip min/avg/max/stddev = 0.756/1.147/48.024/3.129 ms

SYNFIN dropped

210 packets transmitted, 210 packets received, 0.0% packet loss round-trip min/avg/max/stddev = 0.765/1.116/45.793/3.095 ms

Load Testing results

Using Loader.io, a load test was performed against the default NGINX configuration on Vultr FreeBSD 14.0, ramping from 0 to 10,000 clients over 5 minutes with constant request payloads against a 3 MB static HTML page.
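
As a single-request spot check of the payload outside Loader.io, the page can be timed with curl (the URL path is a placeholder for the 3 MB test page):

curl -s -o /dev/null -w 'HTTP %{http_code}: %{size_download} bytes in %{time_total}s\n' http://45.76.235.165/test.html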

SYN-FIN dropped (hardened), full interactive charted results from Loader.io

SYN-FIN allowed results (default)

SYN-FIN dropped results (hardened)

Server CPU, HDD Statistics (hardened test first)

Server Network Statistics (hardened test first)

Analysis

Probe/Scan

This case is not as straightforward as one would expect, since the Nmap-recommended settings produced a success, the very thing this setting should work against. Data gathered on the target machine through Wireshark and tcpdump/pcap was inconclusive as to why; many factors are at play, including other mitigations. Reframing the packets in the QHE-recommended method did result in a failure as expected, although more thorough testing is needed. Overall, however, there are more failures with net.inet.tcp.drop_synfin = 1 than with net.inet.tcp.drop_synfin = 0.

  1. Positive impact

DoS

Although it is a very small measure, nearing 1-2%, all tests pointed in this direction. If this flood were in the millions and distributed, I believe the scale of the mitigation would correlate accordingly.

  1. Positive impact

Performance under load

As you can see, there is variance in the web metrics, with more successes achieved without the setting enabled on some parameters, namely about 20k more successful responses, which makes sense. However, timeouts were lower with the setting enabled in every test. In terms of security this is a big gain, although as an offset it does seem to task the hardware a bit more, as plateauing was observed. On a limited-RAM, single-core machine such as this, the effect is more pronounced.

  1. For Web Application Servers, whose connections typically are not cleanly terminated anyway, enabling the setting is recommended.
  2. For High Security Application Servers, enabling this setting is recommended (see the persistence snippet below).
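
Where enabling is recommended, the setting is made persistent across reboots in /etc/sysctl.conf:

# /etc/sysctl.conf
net.inet.tcp.drop_synfin=1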

Conclusion

These results favor enabling this kernel tunable, which has broad effect, in terms of both performance and security, with two exceptions:

  1. Not recommended for very high volume servers on which the unterminated, unsuccessful requests outweigh the mitigation, or that have other defenses.
  2. Not recommended for server applications that greatly benefit from, cannot re-allocate/reconnect without, or require 99.5% clean TCP FIN termination, such as VoIP and game servers.