
I am working on setting up some ICS honeypots for research, so I need to be able to record the origin IP address of the traffic I receive.

I'm running the servers myself on-prem, but I'm behind CGNAT/double NAT on a 4G connection. I have set up port forwarding through a WireGuard VPN tunnel to a Linux VPS to get an external IP address where I can open ports.

This works fine; however, because of the port forwarding, all traffic received by the honeypots has the origin IP address of the VPS. As far as I can understand, it's not possible to forward it with the origin IP address, as there will be issues with the return traffic routing.

My question is: what would be a good method to record the origin IP so that it can be matched up with the traffic received on the honeypot? I'm planning to capture all traffic at the honeypot; would it be plausible to also capture at the VPS and correlate the two somehow?

Thanks, Dave

Using the answer below, here are the steps I took to get this working:

1: Removed the MASQUERADE rule (the last iptables line below) to stop the source IP address from being modified.

2: Added the policy-based routing on the honeypot side:

ip -4 route add default dev wg0 table 4242
ip -4 rule add pref 500 from x.x.x.2 lookup 4242

3: Changed the AllowedIPs in the WireGuard configuration on the honeypot side to direct all traffic for external IP addresses back through the tunnel. I used this website to calculate the correct configuration: https://www.procustodibus.com/blog/2021/03/wireguard-allowedips-calculator/

One trap to watch out for: make sure you exclude the IP address of the VPS, as otherwise WireGuard will try to direct the tunnel setup traffic through the not-yet-existing tunnel, which goes about as well as you might imagine! A sketch of the resulting configuration is below.
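
For illustration, a minimal [Peer] section on the honeypot side might look roughly like this. The endpoint 203.0.113.10 is a made-up stand-in for the VPS's public address, and the AllowedIPs list is what the calculator produces when you exclude that one /32 from 0.0.0.0/0 (truncated here; the full list has 32 entries):

[Peer]
PublicKey = <VPS public key>
Endpoint = 203.0.113.10:51820
AllowedIPs = 0.0.0.0/1, 128.0.0.0/2, 224.0.0.0/3, 208.0.0.0/4, 192.0.0.0/5, ..., 203.0.113.11/32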

Edit: I've added a diagram to help illustrate things. Both the VPS and the honeypot host are Ubuntu machines connected directly through the tunnel. How would I go about using policy-based routing to preserve the source IP address z.z.z.z once it reaches the honeypot? Let's say y.y.y.y:44444 on the VPS is being forwarded to x.x.x.2:33333.

Network Diagram

Current iptables rules used for forwarding (on the VPS):

# Allow forwarded traffic to and from the honeypot service
iptables -I FORWARD -d x.x.x.2 -p tcp --dport 33333 -j ACCEPT
iptables -I FORWARD -s x.x.x.2 -p tcp --sport 33333 -j ACCEPT
# Redirect connections arriving on port 44444 to the honeypot
iptables -t nat -I PREROUTING -p tcp --dport 44444 -j DNAT --to-destination x.x.x.2:33333
# Rewrite the source address towards the tunnel (this is what hides the origin IP)
iptables -t nat -I POSTROUTING -d x.x.x.2 -o wg0 -j MASQUERADE
DaveM

1 Answer


It's best to not perform the address translation in the first place – SNAT is often the easiest solution to the "reply routing" problem, but not necessarily the only one. It would be possible, and better, to configure the routing properly, even if that takes more time than just slapping a SNAT rule on the server.

But if translation is unavoidable, then it's best to record the original address at the point where address translation happens:

With a Linux-based gateway, all NAT state (both inbound and outbound) – and in fact all per-connection state, even if NAT isn't being used – is available in the "conntrack" subsystem; e.g. you can monitor new connections live using conntrack -E, or run the ulogd2 service if you want everything to be logged to disk (or to a database).
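
For illustration, with the DNAT rule from the question and MASQUERADE removed, a new inbound connection shows up in conntrack -E roughly like this (the attacker port 51234 is made up; z.z.z.z, y.y.y.y and x.x.x.2 follow the diagram). The first tuple is the original one, still carrying the origin address; the second is the translated reply tuple:

conntrack -E
[NEW] tcp      6 120 SYN_SENT src=z.z.z.z dst=y.y.y.y sport=51234 dport=44444 [UNREPLIED] src=x.x.x.2 dst=z.z.z.z sport=33333 dport=51234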

Specifically, you want to set up ulogd2 with the inpflow_NFCT plugin for "stateful flow-based [logging] via nf_conntrack_netlink" (not the per-packet logging via NFLOG). For example, if you want a plaintext greppable log file:

plugin="/usr/lib/x86_64-linux-gnu/ulogd/ulogd_inpflow_NFCT.so"
(other plugins...)
plugin="/usr/lib/x86_64-linux-gnu/ulogd/ulogd_output_LOGEMU.so"
stack=ct1:NFCT,ip2str1:IP2STR,print1:PRINTFLOW,emu2:LOGEMU

[emu2] file="/var/log/ulog/ct.log"
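
After restarting the service, each flow record in that file carries both tuples, so the origin address can be grepped for directly – for example (assuming PRINTFLOW's SRC=/DST= field names):

systemctl restart ulogd2
grep 'SRC=z.z.z.z' /var/log/ulog/ct.log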

With conntrack you won't need to correlate anything as the same state entry will have both the original and translated ("reply") addresses/ports, which is necessary for stateful NAT to work in the first place.

(You might also want to enable net.netfilter.nf_conntrack_acct in sysctl if you want the amounts of data transferred to be logged.)
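
For example, to turn it on immediately and persist it across reboots (the file name under /etc/sysctl.d/ is arbitrary):

sysctl -w net.netfilter.nf_conntrack_acct=1
echo 'net.netfilter.nf_conntrack_acct = 1' > /etc/sysctl.d/90-conntrack-acct.conf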

As far as I can understand, it's not possible to forward it with the origin IP address, as there will be issues with the return traffic routing.

This is partially true – there are always issues with multihoming, but those issues can be dealt with, depending on the OS running on your on-premises gateways (or alternatively the on-premises servers themselves).

If your local gateway (the on-premises WireGuard endpoint) is also Linux-based, then "policy routing" is often used to achieve correct routing of return traffic to different upstreams. In the simplest case (where the tunnel endpoint is also the destination host), policy routing can select routes according to the local source IP address. For example:

  1. On the internal host, create a new routing table that routes everything through wg0 (with wg-quick this can be achieved using Table=; a full sketch follows after this list):

    ip -4 route add default dev wg0 table 4242
    ip -6 route add default dev wg0 table 4242
    
  2. Create a policy rule selecting this table for all replies that are about to be sent from the wg0 IP address:

    ip -4 rule add pref 500 from x.x.x.2 lookup 4242
    ip -6 rule add pref 500 from fdXX:XX::2 lookup 4242
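
For reference, a rough wg-quick equivalent of the two steps above, on the honeypot host (keys, the endpoint address/port and the fdXX:XX:: prefix are placeholders):

# /etc/wireguard/wg0.conf on the honeypot (sketch)
[Interface]
Address = x.x.x.2/32, fdXX:XX::2/128
PrivateKey = <honeypot private key>
# Table= makes wg-quick put the AllowedIPs routes into table 4242 instead of main
Table = 4242
PostUp = ip -4 rule add pref 500 from x.x.x.2 lookup 4242
PostUp = ip -6 rule add pref 500 from fdXX:XX::2 lookup 4242
PostDown = ip -4 rule del pref 500 from x.x.x.2 lookup 4242
PostDown = ip -6 rule del pref 500 from fdXX:XX::2 lookup 4242

[Peer]
PublicKey = <VPS public key>
Endpoint = <VPS public IP>:51820
# or the VPS-excluding list from the question's step 3
AllowedIPs = 0.0.0.0/0, ::/0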
    

When you have multiple hosts behind a separate gateway acting as the WireGuard endpoint, the gateway can use packet marks (again relying on conntrack to correlate incoming and outgoing packets) to select one of several routing tables: packets arriving via WireGuard cause the flow to be marked in conntrack, and packets later arriving from the LAN get routed back via WireGuard due to the mark. A sketch is below.
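
A rough sketch of that marking setup (the interface names lan0/wg0, mark 0x1 and table 4242 are illustrative, and the usual FORWARD/DNAT rules are omitted):

# On the gateway: remember in conntrack which flows arrived over the tunnel
iptables -t mangle -A PREROUTING -i wg0 -m conntrack --ctstate NEW -j CONNMARK --set-mark 0x1
# Copy the saved mark back onto reply packets coming in from the LAN
iptables -t mangle -A PREROUTING -i lan0 -j CONNMARK --restore-mark
# Route marked packets back out through WireGuard
ip -4 route add default dev wg0 table 4242
ip -4 rule add pref 500 fwmark 0x1 lookup 4242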

(Similar features are available in BSD-based gateways using pf; from what I've heard it's even easier to implement such routing with pf compared to Linux.)

grawity