Bye Bye Wiki

9 December 2008 » Tagged as: Python, web 0.8

I had hosted a wiki on this site at http://www.raditha.com/wiki/ for nearly four years, but I was finally forced to take it down because I couldn't keep pace with the spam bots. The last straw came when I ran into the following error while trying to clean up after the vandals.

Database error

A database query syntax error has occurred. This could be because of an illegal search query (see Searching Raditha.com), or it may indicate a bug in the software. The last attempted database query was:

INSERT INTO archive (ar_namespace,ar_title,ar_text,ar_comment,ar_user,ar_user_text,ar_timestamp,ar_minor_edit,ar_flags) SELECT cur_namespace,cur_title,cur_text,cur_comment,cur_user,cur_user_text,cur_timestamp,cur_minor_edit,0 FROM cur WHERE cur_namespace=3 AND cur_title='Alis'

from within function "Article::doDeleteArticle". MySQL returned error "1114: The table 'archive' is full".
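The "table is full" error usually means the archive table hit a MyISAM size limit (older MySQL setups capped MyISAM tables at 4 GB unless the row-pointer size was raised). A sketch of how one might have diagnosed and lifted the limit, assuming MyISAM storage for the MediaWiki tables:

```sql
-- Compare Data_length against Max_data_length to see how close
-- the table is to its configured maximum
SHOW TABLE STATUS LIKE 'archive';

-- Raise the limit by telling MyISAM to allocate wider row pointers
ALTER TABLE archive MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 500;
```

Of course, raising the limit would only have bought time; it does nothing about the spam filling the table in the first place.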

There has been so much spam over the last few years that the MySQL table is full. I was tempted to clean up the database, update the MediaWiki software that powers the wiki, and start all over, but then I realized it would be a lot of hard work for little return.

During the early days there were a few useful contributions to the wiki, but later on they were inundated by spam. Nearly every legitimate contribution was accidentally deleted while cleaning up spam. Some pages would be hit 40-50 times a minute (no wonder the database filled up). I tried combating it with the MediaWiki block list and iptables. Entire subnets were filtered out, to no avail; the spam kept increasing and never showed signs of decreasing.

Trouble might have been averted if I had kept MediaWiki up to date, but that didn't happen. Unlike most CMS-style web applications, it wasn't possible to 'skip versions' with MediaWiki. It might be possible now, but it wasn't always so. That means if you can't keep up with all their new releases, it becomes harder and harder to do the update. I gave up years ago.

In the end I just relied on wget and a small Python script I wrote to convert all the wiki entries into PHP. There are only a few pages (too few really, pathetically few if you would call a spade a spade), which means all the time and effort spent on it was just a waste.
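The wget-and-convert approach amounts to mirroring the wiki with wget and then running each saved page through a script that pulls out the rendered article body and wraps it in the site's PHP template. A minimal sketch of such a script, assuming the old MonoBook skin's `bodyContent` div with its `<!-- end content -->` marker, and hypothetical `header.php`/`footer.php` includes:

```python
import re


def wiki_page_to_php(html, title="Untitled"):
    """Wrap the rendered body of a MediaWiki page in a PHP template.

    Older MediaWiki skins render the article text inside
    <div id="bodyContent"> and close it with an
    <!-- end content --> comment; everything between the two is
    the part worth keeping. The header.php/footer.php includes
    are placeholders for whatever template the site uses.
    """
    m = re.search(
        r'<div id="bodyContent">(.*?)<!-- end content -->',
        html,
        re.DOTALL,
    )
    # Fall back to the whole document if the markers are missing
    body = m.group(1) if m else html
    return (
        "<?php include 'header.php'; ?>\n"
        f"<h1>{title}</h1>\n"
        f"{body.strip()}\n"
        "<?php include 'footer.php'; ?>\n"
    )
```

A driver would just loop over the files wget saved, call this on each one, and write the result out with a `.php` extension.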