# Fixed an embarrassing security bug...

Earlier tonight I started getting error emails triggered by requests from 5.188.62.214, which started POSTing to /admin/post/new, which as revealed in my notes is the URL this blog uses when I submit a new post. I knew I had to deal with this immediately because the error was PHP aborting on an undefined index in a map: the bot behind that IP wasn't sending the expected new post form parameters.
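The abort itself is easy to avoid by checking the expected fields before reading them. A minimal sketch, with hypothetical field names rather than this blog's real ones:

```php
<?php
// Guard for a form handler: report which expected POST fields are absent,
// so the handler can reject the request early instead of hitting an
// undefined index. The field names callers pass are illustrative.
function missing_fields(array $post, array $required): array {
    $missing = [];
    foreach ($required as $field) {
        if (!isset($post[$field])) {
            $missing[] = $field;
        }
    }
    return $missing;
}
```

A handler could then do something like if (missing_fields($_POST, ['title', 'body'])) { http_response_code(400); exit; } before touching any of the values.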

This is odd to me since the HTML form is part of the response payload and all the needed inputs are in there, including the one the bot did send. That one turned out to be the only type="hidden" input, too, which makes it more bizarre. It reminds me of the trick some people use to filter out dumb bots: set a hidden input, then change its value with JS (or unset it), and catch bots that aren't JS-aware. Apparently many still aren't. The bot also sent some parameters from other forms on the page, like the hidden sitesearch param for the Google search box, and the pw param for the login form, with guesses as brilliant as 123456, admin, admin123, and gw44444 (what's that from? too many 4s...), with variants gw111111 and gw66666666.
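The hidden-input trick fits in a few lines. This is an illustration, not this site's code; the field name and values are made-up assumptions:

```php
<?php
// Honeypot check: the form ships a hidden input with a known value, and
// client-side JS rewrites it before submit. A submission still carrying
// the original value came from something that didn't run the JS.
// 'hp_check' and its values are hypothetical names for this sketch.
function is_dumb_bot(array $post): bool {
    // A JS-aware client would have changed 'initial' to 'human' on submit.
    return isset($post['hp_check']) && $post['hp_check'] === 'initial';
}
```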

But all that's beside the point, because what is an unauthenticated request doing getting this far into the handler of an admin page, which is supposed to be restricted to admin users (i.e. me)? And after a bit of investigation I found to my horror that a plain curl on that URL would serve the form...

What a huge blunder. In this case, even if the bot had actually used the form with all the JS processing involved, it wouldn't have been able to make a new post anyway, because the handler would have errored out on not finding a user id for the author. Still, there are other admin URLs that don't check for such things and could have let it edit or delete things, or it could have registered an account and then done some annoying things. (Catastrophic, if it had figured out how to upload files.)

What's the cause? Ultimately it goes back to some questionable architecture from the site's very beginning in 2009, but the bug itself was introduced when I added page caching in 2012. Oops. It's a good thing no one reads this blog, and that I don't have motivated enemies...

On the architecture side, as described in the earlier rewrite notes post, a request URL gets mapped to a Service class. Prior to adding the cache, the code in the index.php file was literally just $handler = new $serviceClass(); -- with $handler then never being used, of course. Me_2009 wasn't the cleanest coder. With the cache, it turned into a call into the caching class, Cacher::handle_service($serviceClass).
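The dispatch described above might look roughly like this. The route patterns and class names (apart from Cacher) are assumptions for the sketch:

```php
<?php
// Map a request URL to a Service class name via regex patterns.
// The patterns and class names here are illustrative, not the real ones.
function resolve_service(array $routes, string $url): ?string {
    foreach ($routes as $pattern => $class) {
        if (preg_match($pattern, $url)) {
            return $class;
        }
    }
    return null;
}

$routes = [
    '#^/admin/post/#' => 'AdminPostService',
    '#^/#'            => 'PageService',
];
$serviceClass = resolve_service($routes, $_SERVER['REQUEST_URI'] ?? '/');
// Pre-2012:  $handler = new $serviceClass();  // work happens in the ctor
// Post-2012: Cacher::handle_service($serviceClass);
```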

What let me get away in the beginning with just a call to the constructor is that each of my service classes has an inheritance chain up to a BaseService class, and each calls its parent constructor, so the base class is where the work normally happens: mapping the URL to a function, calling it, and rendering/serving any output. Since some services ended up needing to call methods of other services (poor factoring), every service's constructor also takes an optional parameter (default false) for whether it's an "internal" construction or not. If it's an "internal" construction, the service class's constructor skips calling the parent constructor, and no work apart from the url-pattern-to-function definition is done.
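The constructor arrangement can be sketched as follows. The class PostService and the method names are assumptions; only the BaseService name and the $internal flag come from the description above:

```php
<?php
// Work-in-the-constructor pattern: the parent constructor does the real
// request handling, so constructing with $internal = true means only the
// url-pattern-to-function definitions happen and nothing is rendered.
class BaseService {
    public array $handled = [];
    public function __construct() {
        // Normally: map the URL to a function, call it, render the output.
        // Here we just record which function would have been invoked.
        $this->handled[] = $this->map_url_to_function();
    }
    protected function map_url_to_function(): string { return 'default'; }
}

class PostService extends BaseService {  // hypothetical concrete service
    public function __construct(bool $internal = false) {
        // (url-pattern-to-function definitions would be set up here)
        if (!$internal) {
            parent::__construct();  // full request handling
        }
    }
    protected function map_url_to_function(): string { return 'new_post'; }
}
```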

The caching class instantiated the Service class with the "internal" parameter set to true, so by default nothing would happen. It then called the class's URL-to-function mapper (it needs the function data for its cache key, and to decide whether a request is even cacheable or not) and could then run the logic of if-cacheable { if-not-cached { render normally, cache it } else { serve from cache } } else { render normally }.
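That branching can be sketched as below. The method names on the service, the $url parameter, and DemoService are all assumptions; only the Cacher name and the internal-construction step come from the post. Note that the "internal" instantiation is exactly where any auth work living in the parent constructor gets skipped:

```php
<?php
// Sketch of the caching flow: construct the service "internally" (no work
// done in the skipped parent ctor), get the URL-to-function mapping for
// the cache key and cacheability decision, then branch.
class Cacher {
    public static array $cache = [];
    public static function handle_service(string $serviceClass, string $url): string {
        $service = new $serviceClass(true);  // "internal": parent ctor skipped
        $fn = $service->function_for($url);  // needed for the cache key
        if ($service->is_cacheable($fn)) {
            if (!isset(self::$cache[$url])) {
                self::$cache[$url] = $service->render($fn);  // render + cache
            }
            return self::$cache[$url];       // serve from cache
        }
        return $service->render($fn);        // render normally
    }
}

// Tiny stand-in service so the sketch is self-contained.
class DemoService {
    public function __construct(bool $internal = false) {}
    public function function_for(string $url): string { return 'page'; }
    public function is_cacheable(string $fn): bool { return true; }
    public function render(string $fn): string { return "rendered:$fn"; }
}
```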