A fairly bizarre issue came up recently that I thought would be amusing to share. It all started simply enough. One of the development teams needed to test some website changes, which required overriding some of the website's files without changing the website itself. This is not difficult for a developer to do, but they wanted something simpler that a number of testers could use. I suggested setting up a proxy server and configuring it to pass most requests through to their normal location, while sending requests for the files under test to the test location. We would only need it for a week or two during the test phase.
I had a little free time, so I decided to set it up for them. I considered setting up an Apache instance using mod_proxy and mod_rewrite, but decided instead to write something custom in Node.js. It took me about an hour to write a custom proxy server and add rules to detect requests for the test files and rewrite them before proxying. I turned it on, tested it by changing my local hosts file to direct website requests to my proxy server, and all was good.
Of course this story cannot stop there. I checked the log file from time to time to keep an eye on it. A few days later I noticed several requests followed by error messages. And then my server crashed.
Error: socket hang up
Well, I had to fix this. I scanned through the code but didn't see any issues. I did some quick research, but I wasn't immediately sure what the issue was. The version of Node.js I had was old, so I upgraded to the latest version; maybe there was a bug in the http library code that had since been fixed. Once again I tested the updated proxy server and let it run.
Again I kept an eye on the log file. A few days later, there it was again. This time my error handling kept the server running, but it started logging error after error, and more requests kept coming in. Now I was annoyed.
GET / HTTP/1.1
I could not figure out where the requests were coming from: my server was behind Network Address Translation (NAT), so I could not easily tell the original IP address of the requests. I made another minor code change and restarted the server. The problem went away, but so did the incoming requests. Hmm… I was busy with other things and had to come back to it.
Later I sat down to look at it again. So what information did I have?
- An incoming request causes a socket hang up error.
- My own requests worked fine.
- The errors occurred after requests for a root page.
- I received many such requests until I restarted the server, then they went away.
- The requests appeared to come in every few days.
- The requests did not appear to be from any of the developers or testers.
- Due to NAT I could not determine the origin of the requests.
I reviewed what I knew and pondered it a little. I could reconfigure my system to take NAT out of the equation; then I could get the IP address of the requests. But that would take work I didn't want to do, and I would probably have to wait a few more days to get anything. And that is when the answer hit me: where the requests were coming from, and why they were breaking my server.
It was a security scanner. Somewhere a process was running inside the corporate network, port scanning and checking for http servers, and probably scanning for other things as well. It was sending requests directly to my host's IP address. My server depended on the Host header in the http requests to determine where to proxy each request. The scanner's requests carried my server's own address in the Host header, so my server was proxying these requests to itself. Over and over again, until it barfed.
Well, the fix was pretty easy then. I added code to restrict which hosts it would proxy for: any request whose Host header was not one of the allowed hosts would receive a 403 Forbidden response. I really should have done that in the first place.
So what is the moral of the story? Well, there is one. When setting up a proxy server, even just for testing for a few days, be sure to protect it against abuse. You never know how someone is going to try to use it.