To cut down on the number of variables stored on the server, I try to turn off session management for spiders so that no session variables need to be created for them. I do this based on user agents and black-listed IP addresses. Recently, however, I have been getting a slew of hits from what I assume are spiders that present a regular browser user agent:
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
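(For context, the toggle I'm describing lives in the Application.cfc pseudo-constructor. Here is a rough sketch - the application name and the bot keyword list are just placeholders, not my actual black-list:)

<cfcomponent output="false">
	<cfset this.name = "MyApplication" />

	<!--- Rough sketch only: treat the request as a spider if the user agent
	      contains a common bot keyword (this list is just an illustration). --->
	<cfset variables.isSpider = (
		reFindNoCase( "(bot|crawl|spider|slurp)", cgi.http_user_agent ) GT 0
		) />

	<!--- Only turn on session management for what looks like a real browser. --->
	<cfset this.sessionManagement = NOT variables.isSpider />
	<cfset this.sessionTimeout = createTimeSpan( 0, 0, 20, 0 ) />
</cfcomponent>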
Since I can't key off that user agent, I thought I would black-list the IP addresses instead; but it seems that the spider is sending a randomized remote address with each page request. The following IP addresses all came from some sort of crawler within two minutes:
188.8.131.52 (3 hits)
184.108.40.206 (2 hits)
220.127.116.11 (2 hits)
18.104.22.168 (2 hits)
22.214.171.124 (2 hits)
126.96.36.199 (3 hits)
I know it was a crawler because all of the requests had the same HTTP referer - my home page - even though not all of the requested pages are reachable from the home page, which means the referer was being set manually. This is so irritating! Now I have dozens upon dozens of sessions being created on the server that will sit there for 20 minutes without ever being used a second time. That is poor memory management.
Why is the spider doing this? I suppose it is meant to defeat sites that serve up different content to spiders, but that is not my purpose. Turning off session management does not serve different content; it just turns off certain server-side tracking. Uggg.
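One thing I have been tempted to try (just a sketch of my own, building on the same Application.cfc pseudo-constructor): since these bots probably never send the session cookies back, give cookieless first requests a much shorter timeout so their sessions get cleaned up quickly:

<!--- Inside the Application.cfc pseudo-constructor. --->
<cfif structKeyExists( cookie, "CFID" ) AND structKeyExists( cookie, "CFTOKEN" )>
	<!--- The client has proven it returns cookies - give it the normal 20 minutes. --->
	<cfset this.sessionTimeout = createTimeSpan( 0, 0, 20, 0 ) />
<cfelse>
	<!--- First-time or cookieless client (most spiders) - let the session expire quickly. --->
	<cfset this.sessionTimeout = createTimeSpan( 0, 0, 2, 0 ) />
</cfif>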
Can you tell if any of those "sneaky" spiders ever look for robots.txt?
I've always wanted to mess around with mod_rewrite or something to funnel robots.txt through CF so I can better pin down requests coming from spiders (something like the sketch below).
Of course, that ASSUMES they even bother looking for robots.txt. If they change their IP address with every request, that would make it difficult too.
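If I ever do get around to it, I picture something like this: a rewrite rule pointing /robots.txt at a CFM template that logs the caller and then serves the plain-text rules. The file names, log name, and rules here are just placeholders:

<!--- robots.cfm - mapped with a rule along the lines of:
      RewriteRule ^robots\.txt$ /robots.cfm [L,NC] --->

<!--- Log who is asking for robots.txt so spider IPs and user agents can be reviewed later. --->
<cflog file="robotsRequests" text="#cgi.remote_addr# | #cgi.http_user_agent# | #cgi.http_referer#" />

<!--- Serve the actual robots rules as plain text. --->
<cfcontent type="text/plain" reset="true" /><cfoutput>User-agent: *
Disallow: /admin/
</cfoutput>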
I have no idea. I assume they don't even bother looking at it??
188.8.131.52 that's my web
I usually don't bother to set up a session until the user is authenticated or has actually done something worthwhile to track. I do start a session if an error occurs, though; once an error does occur, I record the current and referring URLs and store all of the variables being posted or submitted, so that I can better understand WTF the user is doing to cause the error. (It could be my fault... but I prefer to blame the user.)
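Something along these lines is roughly what I mean - just a sketch, the log file name is made up, and it assumes serializeJSON() is available (CF8+):

<cffunction name="onError" returnType="void" output="false">
	<cfargument name="exception" required="true" />
	<cfargument name="eventName" type="string" required="true" />

	<!--- Capture where the user was, where they came from, and what they submitted. --->
	<cflog
		file="applicationErrors"
		type="error"
		text="#arguments.exception.message# | Page: #cgi.script_name#?#cgi.query_string# | Referer: #cgi.http_referer# | FORM: #serializeJSON( form )#" />
</cffunction>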
As for the user agent being the same, that is a little bit odd. It could be websites that are taking screenshots of your page (ref: http://www.browsershots.org) or something similar?
It could also very well be an attempt to take down your site by overloading it.
I know I'm making this longer than it should be, but a user on a local network (e.g. a school, office, etc.) could be using software that changes the IP each time they go to a new page... I have used a similar program when I was in school... (changing my grades xD)
Well I have said enough so far... I think I will leave it at that. If I have left anything out just say something...
Hope I helped @ least 2%
I definitely like the idea of storing multiple values to compare against on subsequent page hits. I would, however, probably just store those values in the COOKIE and the SESSION scopes and compare them against each other; that way, I don't have to hit the database on each request.
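Something like this on each request (just a sketch, with made-up variable names) would catch clients that never return the cookie:

<cfif NOT structKeyExists( session, "visitorToken" )>
	<!--- First hit of this session: stash the same token in both scopes. --->
	<cfset session.visitorToken = createUUID() />
	<cfcookie name="visitorToken" value="#session.visitorToken#" />
<cfelseif NOT structKeyExists( cookie, "visitorToken" ) OR (cookie.visitorToken NEQ session.visitorToken)>
	<!--- The cookie never came back (or doesn't match) - probably a spider, so flag the request. --->
	<cfset request.looksLikeSpider = true />
</cfif>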