VMware Workstation VMs Web Traffic Being Blocked
-
@dfinjr said in VMware Workstation VMs Web Traffic Being Blocked:
Did you mean the literal physical adapter on the host machine?
Yes I would start there. Make sure the replies from the external hosts are making it that far first. Since you only changed the Cisco for pfSense it seems something must be different. Though it's hard to see what!
-
@stephenw10
Sorry, got held up with work; attempting the capture pieces now.
-
Apologies that this took me so long, everyone. Some work stuff hit me between the eyes, and beyond that I wanted to get a good clean capture, filtering out most of the noise from ports that don't matter here. I think I have a good capture now. Two files: test2 was captured from the VM hosting system, listening just for the IP address 172.16.0.202 and not capturing anything to do with ports 3389 or 52311 (the BigFix port), and packetcapture-5 is from pfSense listening on the interface where the system resides, again filtering out 3389 and 52311. I haven't analyzed these myself yet, but I think it's the most solid capture I've gotten yet. With the filters I was able to capture for far longer; I let it run until the browser on the VM gave up and spat out an "err_timed_out". Please let me know what you think. packetcapture-5.cap test2.pcapng.gz
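For reference, the pfSense-side capture boils down to something like this tcpdump filter (the interface name here is just a placeholder for whatever the actual VLAN interface is):
tcpdump -ni igb1 -w packetcapture-5.cap not port 3389 and not port 52311
The test2 capture on the VM host used the same port exclusions plus the VM's address.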
-
Think I finally got a good capture. I've added it to the forum.
I'm doing my best with the VMware Workstation software. It isn't something I use outside of being a means to an end, and I don't have any real professional experience with it. It hosts my VMs, but I am far from an expert with it. Any educational points you can throw my way are of course appreciated.
-
Ah, it looks like on the test2 cap, on the VMware host, you filtered by 172.16.0.202 as the destination, so we don't see any of the traffic 172.16.0.202 is sending.
But that does mean we can see the large packets leaving the pfSense LAN and arriving at the VMware host. If you filter by, for example, ip.addr == 151.101.2.219
again you can see that. So given that those packets do not arrive at the actual VM, it must be some issue with the virtual networking dropping them.
It looks like none of the traffic leaving pfSense toward the VM host is flagged do-not-fragment. And it was previously. You might want to uncheck that setting in pfSense if it's still enabled because that should not be required.
Steve
-
@stephenw10
Thank you for the information, Steve. I'll look at what I can do to alter the VMware configuration to allow the traffic to flow. I'm not sure what to do there yet, but at least I can focus on it a bit more and see if I can get it to go. The question is, in your opinion, knowing that this is the way the traffic was flowing through the ASA before, do you think the Cisco appliance was shaping the traffic somehow so that it could get through?
-
@stephenw10
Also, this is the filter I used for the Wireshark capture. I did have the .202 IP in there, but as destination only:
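In other words, something along the lines of
ip.dst == 172.16.0.202
rather than
ip.addr == 172.16.0.202
(or dst host 172.16.0.202 rather than host 172.16.0.202 if it was a capture filter), so only traffic heading toward the VM was kept.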
-
If it's an MTU issue try testing that with large pings. You can just try pinging out from a VM to pfSense with do-not-fragment set and see where it fails. Or doesn't fail.
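For example, from a Windows VM that might look something like this (substitute the pfSense LAN address; 1472 bytes of payload plus 28 bytes of headers makes a full 1500-byte packet):
ping -f -l 1472 <pfSense LAN IP>
or from a Linux VM:
ping -M do -s 1472 <pfSense LAN IP>
If that fails, step the size down until it goes through and you've found the working MTU on that path.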
It seems like the Cisco was reducing the path MTU somehow. An MSS clamp, perhaps? That might show in the cap we have from when it was in place...
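If it was clamping the MSS, a display filter along these lines should catch it in the old capture (1460 being the normal MSS for a 1500 MTU link; the option only appears on the SYN/SYN-ACK packets):
tcp.options.mss_val < 1460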
-
Or maybe we don't have a pcap taken when the Cisco was routing?
-
@stephenw10
packetcapture-6.cap VMhostSpeedTest.pcapng VMSpeedTest.pcapng
Does this by chance show anything different? You're much faster at the analysis than I am. What I did was start a generic capture on the pfSense VLAN, plus captures on the VM host and the VM itself, all focused on speedtest.net.
I started with a ping and then attempted to browse via IP over 443.
-
Most of this was missed in the pfSense capture because 1000 packets only covers about 8 seconds.
Still only seeing inbound traffic on the host but I wonder if that's a quirk of the bridge mode.
The VM itself still doesn't see any of the full-sized packets. Is there a pcap from when the Cisco was routing and traffic was working?
-
@stephenw10
If it would be helpful for me to do a packet capture on the Cisco, I would be happy to. Just let me know. Thank you again for all the time you're giving this.
-
I'll get one for you now.
Are you happy with the same filters as the capture before this one, or is there anything you'd like me to change on the capture?
-
Those filters are fine. Just to test against speedtest you might try using, say,
151.101.0.0/16
instead. It would be nice to capture only that traffic. I'm signing off for tonight though, 2:30am here. I'll check back tomorrow.
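(A rough shell equivalent of that capture, with the interface name just a placeholder, would be something along these lines:
tcpdump -ni igb1 -w speedtest.cap net 151.101.0.0/16
)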
Steve
-
@stephenw10
Thank you Steve.
I created a capture but it was too big for the forum. You'll find it here:
https://drive.google.com/drive/folders/14C1MTTuwjUnvNYgDJfBy0gmiSQmO5HTQ?usp=sharing
Thanks for everything today!
-
@stephenw10
One last offer I wanted to make: if you'd like, I can also host a GoToMeeting so you can see any of this in real time. Just let me know if that would be of interest to you. It would give me a chance to thank you verbally.
-
@dfinjr said in VMware Workstation VMs Web Traffic Being Blocked:
Think I finally got a good capture. I've added it to the forum.
Here I am; sorry, but we're on European time (UTC), so I had to leave yesterday...
Let's try a simple thing to check for possible problems with the hypervisor:
please install VirtualBox if you can and see what happens with that.
By the way, yesterday I brought the problem up with my vExpert forum friends; we'll see, maybe we'll get there... (?)
-
@daddygo
Thanks for bringing it up with your vExpert contacts. I can likely set up a downstream device in the same area of the network, put VirtualBox on it, and see what it does. I'll let you know what I get.
-
So I went and did a bigger test, and I'm sold on the idea that the Cisco ASA was doing something to the packets to make them flow. I decided to try another router (a TP-Link ER605) just to see how it behaved with the VMs. The results weren't identical, but they were very, very close. Interestingly enough, behind the TP-Link the .202 client could reach sites such as netgate.com and speedtest.net (and even run the speedtest), but it still fails to fully load some sites such as pfsense.org and Gmail.
So the ASA had to be doing something to make the packets survive, because two other firewalls aren't doing whatever the ASA does to make things behave normally...
I'm going to hook the ASA back up and dig through its config for anything that stands out as a possible cause, since nothing comes to mind.
It did prove to me, though, that the ASA was doing something special to the traffic, and that the VMware VM traffic routing is definitely not handling the packets normally in one form or another. My current best guess is that the packets were arriving at the VMware VMs/hosting system already modified by the ASA in such a way that they could pass through the VMware logical networking... Just gotta figure out what that is!
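If the answer turns out to be MSS clamping, I'd expect to find something like this in the running config once I'm back on the box (1380 is the usual ASA default, so it may only show up with the "all" keyword):
show running-config all sysopt
sysopt connection tcpmss 1380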
Color me confused but feeling like we're finally on the right track.
-
Just keeping everyone up to speed. No idea why this would make a lick of difference, but I hard-set the MTU on the lab system's interface to 1500 and now speedtest.net will load and run the full test as expected. Now I just gotta finish sorting out the last few sites. Heading in the right direction.
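For anyone who lands here later, the change was just something along these lines (assuming a Windows lab system; the interface name will differ):
netsh interface ipv4 set subinterface "Ethernet0" mtu=1500 store=persistent
netsh interface ipv4 show subinterfaces
The second command just confirms the value took.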